* [PATCHSET] workqueue: reimplement high priority using a separate worker pool
@ 2012-07-09 18:41 ` Tejun Heo
  0 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-09 18:41 UTC (permalink / raw)
  To: linux-kernel
  Cc: axboe, elder, rni, martin.petersen, linux-bluetooth, torvalds,
	marcel, vwadekar, swhiteho, herbert, bpm, linux-crypto, gustavo,
	xfs, joshhunt00, davem, vgoyal, johan.hedberg

Currently, WQ_HIGHPRI workqueues share the same worker pool as the
normal priority ones.  The only difference is that work items from a
highpri wq are queued at the head instead of the tail of the worklist.
In pathological cases, this simplistic highpri implementation doesn't
seem to be sufficient.

For example, block layer request_queue delayed processing uses a high
priority delayed_work to restart request processing after a short
delay.  Unfortunately, it doesn't seem to take much to push the
latency between the delay timer expiring and the work item executing
into the few-second range, leading to unintended long idling of the
underlying device.  There seem to be real-world cases where this
latency shows up[1].

A simplistic test case is measuring queue-to-execution latencies with
a lot of threads saturating CPU cycles.  Measured over a 300sec period
with 3000 0-nice threads performing 1ms sleeps continuously and a
highpri work item being repeatedly queued at 1 jiffy intervals on a
single CPU machine, the top latency was 1624ms and the average of the
top 20 was 1268ms with a stdev of 927ms.

This patchset reimplements high priority workqueues so that they use
a separate worklist and worker pool.  Each global_cwq now contains two
worker_pools - one for normal priority work items and the other for
high priority.  Each has its own worklist and worker pool, and the
highpri worker pool is populated with worker threads running at a -20
nice value.

This reimplementation brings the top latency down to 16ms with a top
20 average of 3.8ms and a stdev of 5.6ms.  The original block layer
bug hasn't been verified to be fixed yet (Josh?).

The addition of separate worker pools doesn't add much complexity,
but it does add more threads per cpu.  The highpri worker pool is
expected to remain small, but the effect is noticeable, especially in
idle states.

I'm cc'ing all WQ_HIGHPRI users - block, bio-integrity, crypto, gfs2,
xfs and bluetooth.  Now you guys get proper high priority scheduling
for highpri work items; however, with more power comes more
responsibility.

Especially the ones with both WQ_HIGHPRI and WQ_CPU_INTENSIVE -
bio-integrity and crypto - may end up dominating CPU usage.  I think
it should be mostly okay for bio-integrity, considering it sits right
in the block request completion path.  I don't know enough about
tegra-aes tho.  aes_workqueue_handler() seems to mostly interact with
the crypto hardware.  Is it actually cpu cycle intensive?

This patchset contains the following six patches.

 0001-workqueue-don-t-use-WQ_HIGHPRI-for-unbound-workqueue.patch
 0002-workqueue-factor-out-worker_pool-from-global_cwq.patch
 0003-workqueue-use-pool-instead-of-gcwq-or-cpu-where-appl.patch
 0004-workqueue-separate-out-worker_pool-flags.patch
 0005-workqueue-introduce-NR_WORKER_POOLS-and-for_each_wor.patch
 0006-workqueue-reimplement-WQ_HIGHPRI-using-a-separate-wo.patch

0001 makes unbound wq not use WQ_HIGHPRI, as its meaning is about to
change and will no longer suit the purpose unbound wq is using it for.

0002-0005 gradually pull worker_pool out of global_cwq and update
code paths to be able to deal with multiple worker_pools per
global_cwq.

0006 replaces the head-queueing WQ_HIGHPRI implementation with one
using a separate worker_pool, built on the multiple worker_pool
mechanism implemented by the preceding patches.

The patchset is available in the following git branch.

 git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git review-wq-highpri

diffstat follows.

 Documentation/workqueue.txt      |  103 ++----
 include/trace/events/workqueue.h |    2 
 kernel/workqueue.c               |  624 +++++++++++++++++++++------------------
 3 files changed, 385 insertions(+), 344 deletions(-)

Thanks.

--
tejun

[1] https://lkml.org/lkml/2012/3/6/475

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 96+ messages in thread


* [PATCH 1/6] workqueue: don't use WQ_HIGHPRI for unbound workqueues
  2012-07-09 18:41 ` Tejun Heo
@ 2012-07-09 18:41     ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-09 18:41 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen, Tejun Heo

Unbound wqs aren't concurrency-managed and try to execute work items
as soon as possible.  This is currently achieved by implicitly setting
%WQ_HIGHPRI on all unbound workqueues; however, the WQ_HIGHPRI
implementation is about to be restructured and this usage won't be
valid anymore.

Add an explicit chain-wakeup path for unbound workqueues in
process_one_work() instead of piggybacking on %WQ_HIGHPRI.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 kernel/workqueue.c |   18 +++++++++++-------
 1 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 9a3128d..27637c2 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -580,6 +580,10 @@ static bool __need_more_worker(struct global_cwq *gcwq)
 /*
  * Need to wake up a worker?  Called from anything but currently
  * running workers.
+ *
+ * Note that, because unbound workers never contribute to nr_running, this
+ * function will always return %true for unbound gcwq as long as the
+ * worklist isn't empty.
  */
 static bool need_more_worker(struct global_cwq *gcwq)
 {
@@ -1867,6 +1871,13 @@ __acquires(&gcwq->lock)
 	if (unlikely(cpu_intensive))
 		worker_set_flags(worker, WORKER_CPU_INTENSIVE, true);
 
+	/*
+	 * Unbound gcwq isn't concurrency managed and work items should be
+	 * executed ASAP.  Wake up another worker if necessary.
+	 */
+	if ((worker->flags & WORKER_UNBOUND) && need_more_worker(gcwq))
+		wake_up_worker(gcwq);
+
 	spin_unlock_irq(&gcwq->lock);
 
 	work_clear_pending(work);
@@ -2984,13 +2995,6 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	if (flags & WQ_MEM_RECLAIM)
 		flags |= WQ_RESCUER;
 
-	/*
-	 * Unbound workqueues aren't concurrency managed and should be
-	 * dispatched to workers immediately.
-	 */
-	if (flags & WQ_UNBOUND)
-		flags |= WQ_HIGHPRI;
-
 	max_active = max_active ?: WQ_DFL_ACTIVE;
 	max_active = wq_clamp_max_active(max_active, flags, wq->name);
 
-- 
1.7.7.3

^ permalink raw reply related	[flat|nested] 96+ messages in thread


* [PATCH 2/6] workqueue: factor out worker_pool from global_cwq
  2012-07-09 18:41 ` Tejun Heo
@ 2012-07-09 18:41   ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-09 18:41 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen, Tejun Heo

Move worklist and all worker management fields from global_cwq into
the new struct worker_pool.  worker_pool points back to the containing
gcwq.  worker and cpu_workqueue_struct are updated to point to
worker_pool instead of gcwq too.

This change is mechanical and doesn't introduce any functional
difference other than rearranging of fields and an added level of
indirection in some places.  This is to prepare for multiple pools per
gcwq.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 include/trace/events/workqueue.h |    2 +-
 kernel/workqueue.c               |  216 ++++++++++++++++++++-----------------
 2 files changed, 118 insertions(+), 100 deletions(-)

diff --git a/include/trace/events/workqueue.h b/include/trace/events/workqueue.h
index 4018f50..f28d1b6 100644
--- a/include/trace/events/workqueue.h
+++ b/include/trace/events/workqueue.h
@@ -54,7 +54,7 @@ TRACE_EVENT(workqueue_queue_work,
 		__entry->function	= work->func;
 		__entry->workqueue	= cwq->wq;
 		__entry->req_cpu	= req_cpu;
-		__entry->cpu		= cwq->gcwq->cpu;
+		__entry->cpu		= cwq->pool->gcwq->cpu;
 	),
 
 	TP_printk("work struct=%p function=%pf workqueue=%p req_cpu=%u cpu=%u",
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 27637c2..bc43a0c 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -115,6 +115,7 @@ enum {
  */
 
 struct global_cwq;
+struct worker_pool;
 
 /*
  * The poor guys doing the actual heavy lifting.  All on-duty workers
@@ -131,7 +132,7 @@ struct worker {
 	struct cpu_workqueue_struct *current_cwq; /* L: current_work's cwq */
 	struct list_head	scheduled;	/* L: scheduled works */
 	struct task_struct	*task;		/* I: worker task */
-	struct global_cwq	*gcwq;		/* I: the associated gcwq */
+	struct worker_pool	*pool;		/* I: the associated pool */
 	/* 64 bytes boundary on 64bit, 32 on 32bit */
 	unsigned long		last_active;	/* L: last active timestamp */
 	unsigned int		flags;		/* X: flags */
@@ -139,6 +140,21 @@ struct worker {
 	struct work_struct	rebind_work;	/* L: rebind worker to cpu */
 };
 
+struct worker_pool {
+	struct global_cwq	*gcwq;		/* I: the owning gcwq */
+
+	struct list_head	worklist;	/* L: list of pending works */
+	int			nr_workers;	/* L: total number of workers */
+	int			nr_idle;	/* L: currently idle ones */
+
+	struct list_head	idle_list;	/* X: list of idle workers */
+	struct timer_list	idle_timer;	/* L: worker idle timeout */
+	struct timer_list	mayday_timer;	/* L: SOS timer for dworkers */
+
+	struct ida		worker_ida;	/* L: for worker IDs */
+	struct worker		*first_idle;	/* L: first idle worker */
+};
+
 /*
  * Global per-cpu workqueue.  There's one and only one for each cpu
  * and all works are queued and processed here regardless of their
@@ -146,27 +162,18 @@ struct worker {
  */
 struct global_cwq {
 	spinlock_t		lock;		/* the gcwq lock */
-	struct list_head	worklist;	/* L: list of pending works */
 	unsigned int		cpu;		/* I: the associated cpu */
 	unsigned int		flags;		/* L: GCWQ_* flags */
 
-	int			nr_workers;	/* L: total number of workers */
-	int			nr_idle;	/* L: currently idle ones */
-
-	/* workers are chained either in the idle_list or busy_hash */
-	struct list_head	idle_list;	/* X: list of idle workers */
+	/* workers are chained either in busy_head or pool idle_list */
 	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
 						/* L: hash of busy workers */
 
-	struct timer_list	idle_timer;	/* L: worker idle timeout */
-	struct timer_list	mayday_timer;	/* L: SOS timer for dworkers */
-
-	struct ida		worker_ida;	/* L: for worker IDs */
+	struct worker_pool	pool;		/* the worker pools */
 
 	struct task_struct	*trustee;	/* L: for gcwq shutdown */
 	unsigned int		trustee_state;	/* L: trustee state */
 	wait_queue_head_t	trustee_wait;	/* trustee wait */
-	struct worker		*first_idle;	/* L: first idle worker */
 } ____cacheline_aligned_in_smp;
 
 /*
@@ -175,7 +182,7 @@ struct global_cwq {
  * aligned at two's power of the number of flag bits.
  */
 struct cpu_workqueue_struct {
-	struct global_cwq	*gcwq;		/* I: the associated gcwq */
+	struct worker_pool	*pool;		/* I: the associated pool */
 	struct workqueue_struct *wq;		/* I: the owning workqueue */
 	int			work_color;	/* L: current color */
 	int			flush_color;	/* L: flushing color */
@@ -555,7 +562,7 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
 
 	if (data & WORK_STRUCT_CWQ)
 		return ((struct cpu_workqueue_struct *)
-			(data & WORK_STRUCT_WQ_DATA_MASK))->gcwq;
+			(data & WORK_STRUCT_WQ_DATA_MASK))->pool->gcwq;
 
 	cpu = data >> WORK_STRUCT_FLAG_BITS;
 	if (cpu == WORK_CPU_NONE)
@@ -587,13 +594,13 @@ static bool __need_more_worker(struct global_cwq *gcwq)
  */
 static bool need_more_worker(struct global_cwq *gcwq)
 {
-	return !list_empty(&gcwq->worklist) && __need_more_worker(gcwq);
+	return !list_empty(&gcwq->pool.worklist) && __need_more_worker(gcwq);
 }
 
 /* Can I start working?  Called from busy but !running workers. */
 static bool may_start_working(struct global_cwq *gcwq)
 {
-	return gcwq->nr_idle;
+	return gcwq->pool.nr_idle;
 }
 
 /* Do I need to keep working?  Called from currently running workers. */
@@ -601,7 +608,7 @@ static bool keep_working(struct global_cwq *gcwq)
 {
 	atomic_t *nr_running = get_gcwq_nr_running(gcwq->cpu);
 
-	return !list_empty(&gcwq->worklist) &&
+	return !list_empty(&gcwq->pool.worklist) &&
 		(atomic_read(nr_running) <= 1 ||
 		 gcwq->flags & GCWQ_HIGHPRI_PENDING);
 }
@@ -622,8 +629,8 @@ static bool need_to_manage_workers(struct global_cwq *gcwq)
 static bool too_many_workers(struct global_cwq *gcwq)
 {
 	bool managing = gcwq->flags & GCWQ_MANAGING_WORKERS;
-	int nr_idle = gcwq->nr_idle + managing; /* manager is considered idle */
-	int nr_busy = gcwq->nr_workers - nr_idle;
+	int nr_idle = gcwq->pool.nr_idle + managing; /* manager is considered idle */
+	int nr_busy = gcwq->pool.nr_workers - nr_idle;
 
 	return nr_idle > 2 && (nr_idle - 2) * MAX_IDLE_WORKERS_RATIO >= nr_busy;
 }
@@ -635,10 +642,10 @@ static bool too_many_workers(struct global_cwq *gcwq)
 /* Return the first worker.  Safe with preemption disabled */
 static struct worker *first_worker(struct global_cwq *gcwq)
 {
-	if (unlikely(list_empty(&gcwq->idle_list)))
+	if (unlikely(list_empty(&gcwq->pool.idle_list)))
 		return NULL;
 
-	return list_first_entry(&gcwq->idle_list, struct worker, entry);
+	return list_first_entry(&gcwq->pool.idle_list, struct worker, entry);
 }
 
 /**
@@ -696,7 +703,8 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 				       unsigned int cpu)
 {
 	struct worker *worker = kthread_data(task), *to_wakeup = NULL;
-	struct global_cwq *gcwq = get_gcwq(cpu);
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	atomic_t *nr_running = get_gcwq_nr_running(cpu);
 
 	if (worker->flags & WORKER_NOT_RUNNING)
@@ -716,7 +724,7 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 	 * could be manipulating idle_list, so dereferencing idle_list
 	 * without gcwq lock is safe.
 	 */
-	if (atomic_dec_and_test(nr_running) && !list_empty(&gcwq->worklist))
+	if (atomic_dec_and_test(nr_running) && !list_empty(&pool->worklist))
 		to_wakeup = first_worker(gcwq);
 	return to_wakeup ? to_wakeup->task : NULL;
 }
@@ -737,7 +745,8 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 static inline void worker_set_flags(struct worker *worker, unsigned int flags,
 				    bool wakeup)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	WARN_ON_ONCE(worker->task != current);
 
@@ -752,7 +761,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
 
 		if (wakeup) {
 			if (atomic_dec_and_test(nr_running) &&
-			    !list_empty(&gcwq->worklist))
+			    !list_empty(&pool->worklist))
 				wake_up_worker(gcwq);
 		} else
 			atomic_dec(nr_running);
@@ -773,7 +782,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
  */
 static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	unsigned int oflags = worker->flags;
 
 	WARN_ON_ONCE(worker->task != current);
@@ -894,9 +903,9 @@ static inline struct list_head *gcwq_determine_ins_pos(struct global_cwq *gcwq,
 	struct work_struct *twork;
 
 	if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
-		return &gcwq->worklist;
+		return &gcwq->pool.worklist;
 
-	list_for_each_entry(twork, &gcwq->worklist, entry) {
+	list_for_each_entry(twork, &gcwq->pool.worklist, entry) {
 		struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
 
 		if (!(tcwq->wq->flags & WQ_HIGHPRI))
@@ -924,7 +933,7 @@ static void insert_work(struct cpu_workqueue_struct *cwq,
 			struct work_struct *work, struct list_head *head,
 			unsigned int extra_flags)
 {
-	struct global_cwq *gcwq = cwq->gcwq;
+	struct global_cwq *gcwq = cwq->pool->gcwq;
 
 	/* we own @work, set data and link */
 	set_work_cwq(work, cwq, extra_flags);
@@ -1196,7 +1205,8 @@ EXPORT_SYMBOL_GPL(queue_delayed_work_on);
  */
 static void worker_enter_idle(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	BUG_ON(worker->flags & WORKER_IDLE);
 	BUG_ON(!list_empty(&worker->entry) &&
@@ -1204,15 +1214,15 @@ static void worker_enter_idle(struct worker *worker)
 
 	/* can't use worker_set_flags(), also called from start_worker() */
 	worker->flags |= WORKER_IDLE;
-	gcwq->nr_idle++;
+	pool->nr_idle++;
 	worker->last_active = jiffies;
 
 	/* idle_list is LIFO */
-	list_add(&worker->entry, &gcwq->idle_list);
+	list_add(&worker->entry, &pool->idle_list);
 
 	if (likely(!(worker->flags & WORKER_ROGUE))) {
-		if (too_many_workers(gcwq) && !timer_pending(&gcwq->idle_timer))
-			mod_timer(&gcwq->idle_timer,
+		if (too_many_workers(gcwq) && !timer_pending(&pool->idle_timer))
+			mod_timer(&pool->idle_timer,
 				  jiffies + IDLE_WORKER_TIMEOUT);
 	} else
 		wake_up_all(&gcwq->trustee_wait);
@@ -1223,7 +1233,7 @@ static void worker_enter_idle(struct worker *worker)
 	 * warning may trigger spuriously.  Check iff trustee is idle.
 	 */
 	WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
-		     gcwq->nr_workers == gcwq->nr_idle &&
+		     pool->nr_workers == pool->nr_idle &&
 		     atomic_read(get_gcwq_nr_running(gcwq->cpu)));
 }
 
@@ -1238,11 +1248,11 @@ static void worker_enter_idle(struct worker *worker)
  */
 static void worker_leave_idle(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
 
 	BUG_ON(!(worker->flags & WORKER_IDLE));
 	worker_clr_flags(worker, WORKER_IDLE);
-	gcwq->nr_idle--;
+	pool->nr_idle--;
 	list_del_init(&worker->entry);
 }
 
@@ -1279,7 +1289,7 @@ static void worker_leave_idle(struct worker *worker)
 static bool worker_maybe_bind_and_lock(struct worker *worker)
 __acquires(&gcwq->lock)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	struct task_struct *task = worker->task;
 
 	while (true) {
@@ -1321,7 +1331,7 @@ __acquires(&gcwq->lock)
 static void worker_rebind_fn(struct work_struct *work)
 {
 	struct worker *worker = container_of(work, struct worker, rebind_work);
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 
 	if (worker_maybe_bind_and_lock(worker))
 		worker_clr_flags(worker, WORKER_REBIND);
@@ -1362,13 +1372,14 @@ static struct worker *alloc_worker(void)
 static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 {
 	bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+	struct worker_pool *pool = &gcwq->pool;
 	struct worker *worker = NULL;
 	int id = -1;
 
 	spin_lock_irq(&gcwq->lock);
-	while (ida_get_new(&gcwq->worker_ida, &id)) {
+	while (ida_get_new(&pool->worker_ida, &id)) {
 		spin_unlock_irq(&gcwq->lock);
-		if (!ida_pre_get(&gcwq->worker_ida, GFP_KERNEL))
+		if (!ida_pre_get(&pool->worker_ida, GFP_KERNEL))
 			goto fail;
 		spin_lock_irq(&gcwq->lock);
 	}
@@ -1378,7 +1389,7 @@ static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 	if (!worker)
 		goto fail;
 
-	worker->gcwq = gcwq;
+	worker->pool = pool;
 	worker->id = id;
 
 	if (!on_unbound_cpu)
@@ -1409,7 +1420,7 @@ static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 fail:
 	if (id >= 0) {
 		spin_lock_irq(&gcwq->lock);
-		ida_remove(&gcwq->worker_ida, id);
+		ida_remove(&pool->worker_ida, id);
 		spin_unlock_irq(&gcwq->lock);
 	}
 	kfree(worker);
@@ -1428,7 +1439,7 @@ fail:
 static void start_worker(struct worker *worker)
 {
 	worker->flags |= WORKER_STARTED;
-	worker->gcwq->nr_workers++;
+	worker->pool->nr_workers++;
 	worker_enter_idle(worker);
 	wake_up_process(worker->task);
 }
@@ -1444,7 +1455,8 @@ static void start_worker(struct worker *worker)
  */
 static void destroy_worker(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	int id = worker->id;
 
 	/* sanity check frenzy */
@@ -1452,9 +1464,9 @@ static void destroy_worker(struct worker *worker)
 	BUG_ON(!list_empty(&worker->scheduled));
 
 	if (worker->flags & WORKER_STARTED)
-		gcwq->nr_workers--;
+		pool->nr_workers--;
 	if (worker->flags & WORKER_IDLE)
-		gcwq->nr_idle--;
+		pool->nr_idle--;
 
 	list_del_init(&worker->entry);
 	worker->flags |= WORKER_DIE;
@@ -1465,7 +1477,7 @@ static void destroy_worker(struct worker *worker)
 	kfree(worker);
 
 	spin_lock_irq(&gcwq->lock);
-	ida_remove(&gcwq->worker_ida, id);
+	ida_remove(&pool->worker_ida, id);
 }
 
 static void idle_worker_timeout(unsigned long __gcwq)
@@ -1479,11 +1491,12 @@ static void idle_worker_timeout(unsigned long __gcwq)
 		unsigned long expires;
 
 		/* idle_list is kept in LIFO order, check the last one */
-		worker = list_entry(gcwq->idle_list.prev, struct worker, entry);
+		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
+				    entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires))
-			mod_timer(&gcwq->idle_timer, expires);
+			mod_timer(&gcwq->pool.idle_timer, expires);
 		else {
 			/* it's been idle for too long, wake up manager */
 			gcwq->flags |= GCWQ_MANAGE_WORKERS;
@@ -1504,7 +1517,7 @@ static bool send_mayday(struct work_struct *work)
 		return false;
 
 	/* mayday mayday mayday */
-	cpu = cwq->gcwq->cpu;
+	cpu = cwq->pool->gcwq->cpu;
 	/* WORK_CPU_UNBOUND can't be set in cpumask, use cpu 0 instead */
 	if (cpu == WORK_CPU_UNBOUND)
 		cpu = 0;
@@ -1527,13 +1540,13 @@ static void gcwq_mayday_timeout(unsigned long __gcwq)
 		 * allocation deadlock.  Send distress signals to
 		 * rescuers.
 		 */
-		list_for_each_entry(work, &gcwq->worklist, entry)
+		list_for_each_entry(work, &gcwq->pool.worklist, entry)
 			send_mayday(work);
 	}
 
 	spin_unlock_irq(&gcwq->lock);
 
-	mod_timer(&gcwq->mayday_timer, jiffies + MAYDAY_INTERVAL);
+	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INTERVAL);
 }
 
 /**
@@ -1568,14 +1581,14 @@ restart:
 	spin_unlock_irq(&gcwq->lock);
 
 	/* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
-	mod_timer(&gcwq->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
+	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
 
 	while (true) {
 		struct worker *worker;
 
 		worker = create_worker(gcwq, true);
 		if (worker) {
-			del_timer_sync(&gcwq->mayday_timer);
+			del_timer_sync(&gcwq->pool.mayday_timer);
 			spin_lock_irq(&gcwq->lock);
 			start_worker(worker);
 			BUG_ON(need_to_create_worker(gcwq));
@@ -1592,7 +1605,7 @@ restart:
 			break;
 	}
 
-	del_timer_sync(&gcwq->mayday_timer);
+	del_timer_sync(&gcwq->pool.mayday_timer);
 	spin_lock_irq(&gcwq->lock);
 	if (need_to_create_worker(gcwq))
 		goto restart;
@@ -1622,11 +1635,12 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
 		struct worker *worker;
 		unsigned long expires;
 
-		worker = list_entry(gcwq->idle_list.prev, struct worker, entry);
+		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
+				    entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires)) {
-			mod_timer(&gcwq->idle_timer, expires);
+			mod_timer(&gcwq->pool.idle_timer, expires);
 			break;
 		}
 
@@ -1659,7 +1673,7 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
  */
 static bool manage_workers(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	bool ret = false;
 
 	if (gcwq->flags & GCWQ_MANAGING_WORKERS)
@@ -1732,7 +1746,7 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *work = list_first_entry(&cwq->delayed_works,
 						    struct work_struct, entry);
-	struct list_head *pos = gcwq_determine_ins_pos(cwq->gcwq, cwq);
+	struct list_head *pos = gcwq_determine_ins_pos(cwq->pool->gcwq, cwq);
 
 	trace_workqueue_activate_work(work);
 	move_linked_works(work, pos, NULL);
@@ -1808,7 +1822,8 @@ __releases(&gcwq->lock)
 __acquires(&gcwq->lock)
 {
 	struct cpu_workqueue_struct *cwq = get_work_cwq(work);
-	struct global_cwq *gcwq = cwq->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	struct hlist_head *bwh = busy_worker_head(gcwq, work);
 	bool cpu_intensive = cwq->wq->flags & WQ_CPU_INTENSIVE;
 	work_func_t f = work->func;
@@ -1854,10 +1869,10 @@ __acquires(&gcwq->lock)
 	 * wake up another worker; otherwise, clear HIGHPRI_PENDING.
 	 */
 	if (unlikely(gcwq->flags & GCWQ_HIGHPRI_PENDING)) {
-		struct work_struct *nwork = list_first_entry(&gcwq->worklist,
-						struct work_struct, entry);
+		struct work_struct *nwork = list_first_entry(&pool->worklist,
+					 struct work_struct, entry);
 
-		if (!list_empty(&gcwq->worklist) &&
+		if (!list_empty(&pool->worklist) &&
 		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
 			wake_up_worker(gcwq);
 		else
@@ -1950,7 +1965,8 @@ static void process_scheduled_works(struct worker *worker)
 static int worker_thread(void *__worker)
 {
 	struct worker *worker = __worker;
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	/* tell the scheduler that this is a workqueue worker */
 	worker->task->flags |= PF_WQ_WORKER;
@@ -1990,7 +2006,7 @@ recheck:
 
 	do {
 		struct work_struct *work =
-			list_first_entry(&gcwq->worklist,
+			list_first_entry(&pool->worklist,
 					 struct work_struct, entry);
 
 		if (likely(!(*work_data_bits(work) & WORK_STRUCT_LINKED))) {
@@ -2064,14 +2080,15 @@ repeat:
 	for_each_mayday_cpu(cpu, wq->mayday_mask) {
 		unsigned int tcpu = is_unbound ? WORK_CPU_UNBOUND : cpu;
 		struct cpu_workqueue_struct *cwq = get_cwq(tcpu, wq);
-		struct global_cwq *gcwq = cwq->gcwq;
+		struct worker_pool *pool = cwq->pool;
+		struct global_cwq *gcwq = pool->gcwq;
 		struct work_struct *work, *n;
 
 		__set_current_state(TASK_RUNNING);
 		mayday_clear_cpu(cpu, wq->mayday_mask);
 
 		/* migrate to the target cpu if possible */
-		rescuer->gcwq = gcwq;
+		rescuer->pool = pool;
 		worker_maybe_bind_and_lock(rescuer);
 
 		/*
@@ -2079,7 +2096,7 @@ repeat:
 		 * process'em.
 		 */
 		BUG_ON(!list_empty(&rescuer->scheduled));
-		list_for_each_entry_safe(work, n, &gcwq->worklist, entry)
+		list_for_each_entry_safe(work, n, &pool->worklist, entry)
 			if (get_work_cwq(work) == cwq)
 				move_linked_works(work, scheduled, &n);
 
@@ -2216,7 +2233,7 @@ static bool flush_workqueue_prep_cwqs(struct workqueue_struct *wq,
 
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
-		struct global_cwq *gcwq = cwq->gcwq;
+		struct global_cwq *gcwq = cwq->pool->gcwq;
 
 		spin_lock_irq(&gcwq->lock);
 
@@ -2432,9 +2449,9 @@ reflush:
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
 		bool drained;
 
-		spin_lock_irq(&cwq->gcwq->lock);
+		spin_lock_irq(&cwq->pool->gcwq->lock);
 		drained = !cwq->nr_active && list_empty(&cwq->delayed_works);
-		spin_unlock_irq(&cwq->gcwq->lock);
+		spin_unlock_irq(&cwq->pool->gcwq->lock);
 
 		if (drained)
 			continue;
@@ -2474,7 +2491,7 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
 		 */
 		smp_rmb();
 		cwq = get_work_cwq(work);
-		if (unlikely(!cwq || gcwq != cwq->gcwq))
+		if (unlikely(!cwq || gcwq != cwq->pool->gcwq))
 			goto already_gone;
 	} else if (wait_executing) {
 		worker = find_worker_executing_work(gcwq, work);
@@ -3017,7 +3034,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 		struct global_cwq *gcwq = get_gcwq(cpu);
 
 		BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
-		cwq->gcwq = gcwq;
+		cwq->pool = &gcwq->pool;
 		cwq->wq = wq;
 		cwq->flush_color = -1;
 		cwq->max_active = max_active;
@@ -3344,7 +3361,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 
 	gcwq->flags |= GCWQ_MANAGING_WORKERS;
 
-	list_for_each_entry(worker, &gcwq->idle_list, entry)
+	list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
 		worker->flags |= WORKER_ROGUE;
 
 	for_each_busy_worker(worker, i, pos, gcwq)
@@ -3369,7 +3386,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	atomic_set(get_gcwq_nr_running(gcwq->cpu), 0);
 
 	spin_unlock_irq(&gcwq->lock);
-	del_timer_sync(&gcwq->idle_timer);
+	del_timer_sync(&gcwq->pool.idle_timer);
 	spin_lock_irq(&gcwq->lock);
 
 	/*
@@ -3391,17 +3408,17 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * may be frozen works in freezable cwqs.  Don't declare
 	 * completion while frozen.
 	 */
-	while (gcwq->nr_workers != gcwq->nr_idle ||
+	while (gcwq->pool.nr_workers != gcwq->pool.nr_idle ||
 	       gcwq->flags & GCWQ_FREEZING ||
 	       gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
 		int nr_works = 0;
 
-		list_for_each_entry(work, &gcwq->worklist, entry) {
+		list_for_each_entry(work, &gcwq->pool.worklist, entry) {
 			send_mayday(work);
 			nr_works++;
 		}
 
-		list_for_each_entry(worker, &gcwq->idle_list, entry) {
+		list_for_each_entry(worker, &gcwq->pool.idle_list, entry) {
 			if (!nr_works--)
 				break;
 			wake_up_process(worker->task);
@@ -3428,11 +3445,11 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * all workers till we're canceled.
 	 */
 	do {
-		rc = trustee_wait_event(!list_empty(&gcwq->idle_list));
-		while (!list_empty(&gcwq->idle_list))
-			destroy_worker(list_first_entry(&gcwq->idle_list,
+		rc = trustee_wait_event(!list_empty(&gcwq->pool.idle_list));
+		while (!list_empty(&gcwq->pool.idle_list))
+			destroy_worker(list_first_entry(&gcwq->pool.idle_list,
 							struct worker, entry));
-	} while (gcwq->nr_workers && rc >= 0);
+	} while (gcwq->pool.nr_workers && rc >= 0);
 
 	/*
 	 * At this point, either draining has completed and no worker
@@ -3441,7 +3458,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * Tell the remaining busy ones to rebind once it finishes the
 	 * currently scheduled works by scheduling the rebind_work.
 	 */
-	WARN_ON(!list_empty(&gcwq->idle_list));
+	WARN_ON(!list_empty(&gcwq->pool.idle_list));
 
 	for_each_busy_worker(worker, i, pos, gcwq) {
 		struct work_struct *rebind_work = &worker->rebind_work;
@@ -3522,7 +3539,7 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		kthread_bind(new_trustee, cpu);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->first_idle);
+		BUG_ON(gcwq->pool.first_idle);
 		new_worker = create_worker(gcwq, false);
 		if (!new_worker) {
 			if (new_trustee)
@@ -3544,8 +3561,8 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->first_idle);
-		gcwq->first_idle = new_worker;
+		BUG_ON(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = new_worker;
 		break;
 
 	case CPU_DYING:
@@ -3562,8 +3579,8 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		gcwq->trustee_state = TRUSTEE_BUTCHER;
 		/* fall through */
 	case CPU_UP_CANCELED:
-		destroy_worker(gcwq->first_idle);
-		gcwq->first_idle = NULL;
+		destroy_worker(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = NULL;
 		break;
 
 	case CPU_DOWN_FAILED:
@@ -3581,11 +3598,11 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		 * take a look.
 		 */
 		spin_unlock_irq(&gcwq->lock);
-		kthread_bind(gcwq->first_idle->task, cpu);
+		kthread_bind(gcwq->pool.first_idle->task, cpu);
 		spin_lock_irq(&gcwq->lock);
 		gcwq->flags |= GCWQ_MANAGE_WORKERS;
-		start_worker(gcwq->first_idle);
-		gcwq->first_idle = NULL;
+		start_worker(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = NULL;
 		break;
 	}
 
@@ -3794,22 +3811,23 @@ static int __init init_workqueues(void)
 		struct global_cwq *gcwq = get_gcwq(cpu);
 
 		spin_lock_init(&gcwq->lock);
-		INIT_LIST_HEAD(&gcwq->worklist);
+		gcwq->pool.gcwq = gcwq;
+		INIT_LIST_HEAD(&gcwq->pool.worklist);
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		INIT_LIST_HEAD(&gcwq->idle_list);
+		INIT_LIST_HEAD(&gcwq->pool.idle_list);
 		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
 			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
 
-		init_timer_deferrable(&gcwq->idle_timer);
-		gcwq->idle_timer.function = idle_worker_timeout;
-		gcwq->idle_timer.data = (unsigned long)gcwq;
+		init_timer_deferrable(&gcwq->pool.idle_timer);
+		gcwq->pool.idle_timer.function = idle_worker_timeout;
+		gcwq->pool.idle_timer.data = (unsigned long)gcwq;
 
-		setup_timer(&gcwq->mayday_timer, gcwq_mayday_timeout,
+		setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
 			    (unsigned long)gcwq);
 
-		ida_init(&gcwq->worker_ida);
+		ida_init(&gcwq->pool.worker_ida);
 
 		gcwq->trustee_state = TRUSTEE_DONE;
 		init_waitqueue_head(&gcwq->trustee_wait);
-- 
1.7.7.3


* [PATCH 2/6] workqueue: factor out worker_pool from global_cwq
@ 2012-07-09 18:41   ` Tejun Heo
From: Tejun Heo @ 2012-07-09 18:41 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen, Tejun Heo

Move worklist and all worker management fields from global_cwq into
the new struct worker_pool.  worker_pool points back to the containing
gcwq.  worker and cpu_workqueue_struct are updated to point to
worker_pool instead of gcwq too.

This change is mechanical and introduces no functional difference other
than the rearrangement of fields and an added level of indirection in
some places.  This is to prepare for multiple pools per gcwq.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 include/trace/events/workqueue.h |    2 +-
 kernel/workqueue.c               |  216 ++++++++++++++++++++-----------------
 2 files changed, 118 insertions(+), 100 deletions(-)

diff --git a/include/trace/events/workqueue.h b/include/trace/events/workqueue.h
index 4018f50..f28d1b6 100644
--- a/include/trace/events/workqueue.h
+++ b/include/trace/events/workqueue.h
@@ -54,7 +54,7 @@ TRACE_EVENT(workqueue_queue_work,
 		__entry->function	= work->func;
 		__entry->workqueue	= cwq->wq;
 		__entry->req_cpu	= req_cpu;
-		__entry->cpu		= cwq->gcwq->cpu;
+		__entry->cpu		= cwq->pool->gcwq->cpu;
 	),
 
 	TP_printk("work struct=%p function=%pf workqueue=%p req_cpu=%u cpu=%u",
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 27637c2..bc43a0c 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -115,6 +115,7 @@ enum {
  */
 
 struct global_cwq;
+struct worker_pool;
 
 /*
  * The poor guys doing the actual heavy lifting.  All on-duty workers
@@ -131,7 +132,7 @@ struct worker {
 	struct cpu_workqueue_struct *current_cwq; /* L: current_work's cwq */
 	struct list_head	scheduled;	/* L: scheduled works */
 	struct task_struct	*task;		/* I: worker task */
-	struct global_cwq	*gcwq;		/* I: the associated gcwq */
+	struct worker_pool	*pool;		/* I: the associated pool */
 	/* 64 bytes boundary on 64bit, 32 on 32bit */
 	unsigned long		last_active;	/* L: last active timestamp */
 	unsigned int		flags;		/* X: flags */
@@ -139,6 +140,21 @@ struct worker {
 	struct work_struct	rebind_work;	/* L: rebind worker to cpu */
 };
 
+struct worker_pool {
+	struct global_cwq	*gcwq;		/* I: the owning gcwq */
+
+	struct list_head	worklist;	/* L: list of pending works */
+	int			nr_workers;	/* L: total number of workers */
+	int			nr_idle;	/* L: currently idle ones */
+
+	struct list_head	idle_list;	/* X: list of idle workers */
+	struct timer_list	idle_timer;	/* L: worker idle timeout */
+	struct timer_list	mayday_timer;	/* L: SOS timer for dworkers */
+
+	struct ida		worker_ida;	/* L: for worker IDs */
+	struct worker		*first_idle;	/* L: first idle worker */
+};
+
 /*
  * Global per-cpu workqueue.  There's one and only one for each cpu
  * and all works are queued and processed here regardless of their
@@ -146,27 +162,18 @@ struct worker {
  */
 struct global_cwq {
 	spinlock_t		lock;		/* the gcwq lock */
-	struct list_head	worklist;	/* L: list of pending works */
 	unsigned int		cpu;		/* I: the associated cpu */
 	unsigned int		flags;		/* L: GCWQ_* flags */
 
-	int			nr_workers;	/* L: total number of workers */
-	int			nr_idle;	/* L: currently idle ones */
-
-	/* workers are chained either in the idle_list or busy_hash */
-	struct list_head	idle_list;	/* X: list of idle workers */
+	/* workers are chained either in busy_head or pool idle_list */
 	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
 						/* L: hash of busy workers */
 
-	struct timer_list	idle_timer;	/* L: worker idle timeout */
-	struct timer_list	mayday_timer;	/* L: SOS timer for dworkers */
-
-	struct ida		worker_ida;	/* L: for worker IDs */
+	struct worker_pool	pool;		/* the worker pools */
 
 	struct task_struct	*trustee;	/* L: for gcwq shutdown */
 	unsigned int		trustee_state;	/* L: trustee state */
 	wait_queue_head_t	trustee_wait;	/* trustee wait */
-	struct worker		*first_idle;	/* L: first idle worker */
 } ____cacheline_aligned_in_smp;
 
 /*
@@ -175,7 +182,7 @@ struct global_cwq {
  * aligned at two's power of the number of flag bits.
  */
 struct cpu_workqueue_struct {
-	struct global_cwq	*gcwq;		/* I: the associated gcwq */
+	struct worker_pool	*pool;		/* I: the associated pool */
 	struct workqueue_struct *wq;		/* I: the owning workqueue */
 	int			work_color;	/* L: current color */
 	int			flush_color;	/* L: flushing color */
@@ -555,7 +562,7 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
 
 	if (data & WORK_STRUCT_CWQ)
 		return ((struct cpu_workqueue_struct *)
-			(data & WORK_STRUCT_WQ_DATA_MASK))->gcwq;
+			(data & WORK_STRUCT_WQ_DATA_MASK))->pool->gcwq;
 
 	cpu = data >> WORK_STRUCT_FLAG_BITS;
 	if (cpu == WORK_CPU_NONE)
@@ -587,13 +594,13 @@ static bool __need_more_worker(struct global_cwq *gcwq)
  */
 static bool need_more_worker(struct global_cwq *gcwq)
 {
-	return !list_empty(&gcwq->worklist) && __need_more_worker(gcwq);
+	return !list_empty(&gcwq->pool.worklist) && __need_more_worker(gcwq);
 }
 
 /* Can I start working?  Called from busy but !running workers. */
 static bool may_start_working(struct global_cwq *gcwq)
 {
-	return gcwq->nr_idle;
+	return gcwq->pool.nr_idle;
 }
 
 /* Do I need to keep working?  Called from currently running workers. */
@@ -601,7 +608,7 @@ static bool keep_working(struct global_cwq *gcwq)
 {
 	atomic_t *nr_running = get_gcwq_nr_running(gcwq->cpu);
 
-	return !list_empty(&gcwq->worklist) &&
+	return !list_empty(&gcwq->pool.worklist) &&
 		(atomic_read(nr_running) <= 1 ||
 		 gcwq->flags & GCWQ_HIGHPRI_PENDING);
 }
@@ -622,8 +629,8 @@ static bool need_to_manage_workers(struct global_cwq *gcwq)
 static bool too_many_workers(struct global_cwq *gcwq)
 {
 	bool managing = gcwq->flags & GCWQ_MANAGING_WORKERS;
-	int nr_idle = gcwq->nr_idle + managing; /* manager is considered idle */
-	int nr_busy = gcwq->nr_workers - nr_idle;
+	int nr_idle = gcwq->pool.nr_idle + managing; /* manager is considered idle */
+	int nr_busy = gcwq->pool.nr_workers - nr_idle;
 
 	return nr_idle > 2 && (nr_idle - 2) * MAX_IDLE_WORKERS_RATIO >= nr_busy;
 }
@@ -635,10 +642,10 @@ static bool too_many_workers(struct global_cwq *gcwq)
 /* Return the first worker.  Safe with preemption disabled */
 static struct worker *first_worker(struct global_cwq *gcwq)
 {
-	if (unlikely(list_empty(&gcwq->idle_list)))
+	if (unlikely(list_empty(&gcwq->pool.idle_list)))
 		return NULL;
 
-	return list_first_entry(&gcwq->idle_list, struct worker, entry);
+	return list_first_entry(&gcwq->pool.idle_list, struct worker, entry);
 }
 
 /**
@@ -696,7 +703,8 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 				       unsigned int cpu)
 {
 	struct worker *worker = kthread_data(task), *to_wakeup = NULL;
-	struct global_cwq *gcwq = get_gcwq(cpu);
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	atomic_t *nr_running = get_gcwq_nr_running(cpu);
 
 	if (worker->flags & WORKER_NOT_RUNNING)
@@ -716,7 +724,7 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 	 * could be manipulating idle_list, so dereferencing idle_list
 	 * without gcwq lock is safe.
 	 */
-	if (atomic_dec_and_test(nr_running) && !list_empty(&gcwq->worklist))
+	if (atomic_dec_and_test(nr_running) && !list_empty(&pool->worklist))
 		to_wakeup = first_worker(gcwq);
 	return to_wakeup ? to_wakeup->task : NULL;
 }
@@ -737,7 +745,8 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 static inline void worker_set_flags(struct worker *worker, unsigned int flags,
 				    bool wakeup)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	WARN_ON_ONCE(worker->task != current);
 
@@ -752,7 +761,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
 
 		if (wakeup) {
 			if (atomic_dec_and_test(nr_running) &&
-			    !list_empty(&gcwq->worklist))
+			    !list_empty(&pool->worklist))
 				wake_up_worker(gcwq);
 		} else
 			atomic_dec(nr_running);
@@ -773,7 +782,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
  */
 static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	unsigned int oflags = worker->flags;
 
 	WARN_ON_ONCE(worker->task != current);
@@ -894,9 +903,9 @@ static inline struct list_head *gcwq_determine_ins_pos(struct global_cwq *gcwq,
 	struct work_struct *twork;
 
 	if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
-		return &gcwq->worklist;
+		return &gcwq->pool.worklist;
 
-	list_for_each_entry(twork, &gcwq->worklist, entry) {
+	list_for_each_entry(twork, &gcwq->pool.worklist, entry) {
 		struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
 
 		if (!(tcwq->wq->flags & WQ_HIGHPRI))
@@ -924,7 +933,7 @@ static void insert_work(struct cpu_workqueue_struct *cwq,
 			struct work_struct *work, struct list_head *head,
 			unsigned int extra_flags)
 {
-	struct global_cwq *gcwq = cwq->gcwq;
+	struct global_cwq *gcwq = cwq->pool->gcwq;
 
 	/* we own @work, set data and link */
 	set_work_cwq(work, cwq, extra_flags);
@@ -1196,7 +1205,8 @@ EXPORT_SYMBOL_GPL(queue_delayed_work_on);
  */
 static void worker_enter_idle(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	BUG_ON(worker->flags & WORKER_IDLE);
 	BUG_ON(!list_empty(&worker->entry) &&
@@ -1204,15 +1214,15 @@ static void worker_enter_idle(struct worker *worker)
 
 	/* can't use worker_set_flags(), also called from start_worker() */
 	worker->flags |= WORKER_IDLE;
-	gcwq->nr_idle++;
+	pool->nr_idle++;
 	worker->last_active = jiffies;
 
 	/* idle_list is LIFO */
-	list_add(&worker->entry, &gcwq->idle_list);
+	list_add(&worker->entry, &pool->idle_list);
 
 	if (likely(!(worker->flags & WORKER_ROGUE))) {
-		if (too_many_workers(gcwq) && !timer_pending(&gcwq->idle_timer))
-			mod_timer(&gcwq->idle_timer,
+		if (too_many_workers(gcwq) && !timer_pending(&pool->idle_timer))
+			mod_timer(&pool->idle_timer,
 				  jiffies + IDLE_WORKER_TIMEOUT);
 	} else
 		wake_up_all(&gcwq->trustee_wait);
@@ -1223,7 +1233,7 @@ static void worker_enter_idle(struct worker *worker)
 	 * warning may trigger spuriously.  Check iff trustee is idle.
 	 */
 	WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
-		     gcwq->nr_workers == gcwq->nr_idle &&
+		     pool->nr_workers == pool->nr_idle &&
 		     atomic_read(get_gcwq_nr_running(gcwq->cpu)));
 }
 
@@ -1238,11 +1248,11 @@ static void worker_enter_idle(struct worker *worker)
  */
 static void worker_leave_idle(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
 
 	BUG_ON(!(worker->flags & WORKER_IDLE));
 	worker_clr_flags(worker, WORKER_IDLE);
-	gcwq->nr_idle--;
+	pool->nr_idle--;
 	list_del_init(&worker->entry);
 }
 
@@ -1279,7 +1289,7 @@ static void worker_leave_idle(struct worker *worker)
 static bool worker_maybe_bind_and_lock(struct worker *worker)
 __acquires(&gcwq->lock)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	struct task_struct *task = worker->task;
 
 	while (true) {
@@ -1321,7 +1331,7 @@ __acquires(&gcwq->lock)
 static void worker_rebind_fn(struct work_struct *work)
 {
 	struct worker *worker = container_of(work, struct worker, rebind_work);
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 
 	if (worker_maybe_bind_and_lock(worker))
 		worker_clr_flags(worker, WORKER_REBIND);
@@ -1362,13 +1372,14 @@ static struct worker *alloc_worker(void)
 static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 {
 	bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+	struct worker_pool *pool = &gcwq->pool;
 	struct worker *worker = NULL;
 	int id = -1;
 
 	spin_lock_irq(&gcwq->lock);
-	while (ida_get_new(&gcwq->worker_ida, &id)) {
+	while (ida_get_new(&pool->worker_ida, &id)) {
 		spin_unlock_irq(&gcwq->lock);
-		if (!ida_pre_get(&gcwq->worker_ida, GFP_KERNEL))
+		if (!ida_pre_get(&pool->worker_ida, GFP_KERNEL))
 			goto fail;
 		spin_lock_irq(&gcwq->lock);
 	}
@@ -1378,7 +1389,7 @@ static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 	if (!worker)
 		goto fail;
 
-	worker->gcwq = gcwq;
+	worker->pool = pool;
 	worker->id = id;
 
 	if (!on_unbound_cpu)
@@ -1409,7 +1420,7 @@ static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 fail:
 	if (id >= 0) {
 		spin_lock_irq(&gcwq->lock);
-		ida_remove(&gcwq->worker_ida, id);
+		ida_remove(&pool->worker_ida, id);
 		spin_unlock_irq(&gcwq->lock);
 	}
 	kfree(worker);
@@ -1428,7 +1439,7 @@ fail:
 static void start_worker(struct worker *worker)
 {
 	worker->flags |= WORKER_STARTED;
-	worker->gcwq->nr_workers++;
+	worker->pool->nr_workers++;
 	worker_enter_idle(worker);
 	wake_up_process(worker->task);
 }
@@ -1444,7 +1455,8 @@ static void start_worker(struct worker *worker)
  */
 static void destroy_worker(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	int id = worker->id;
 
 	/* sanity check frenzy */
@@ -1452,9 +1464,9 @@ static void destroy_worker(struct worker *worker)
 	BUG_ON(!list_empty(&worker->scheduled));
 
 	if (worker->flags & WORKER_STARTED)
-		gcwq->nr_workers--;
+		pool->nr_workers--;
 	if (worker->flags & WORKER_IDLE)
-		gcwq->nr_idle--;
+		pool->nr_idle--;
 
 	list_del_init(&worker->entry);
 	worker->flags |= WORKER_DIE;
@@ -1465,7 +1477,7 @@ static void destroy_worker(struct worker *worker)
 	kfree(worker);
 
 	spin_lock_irq(&gcwq->lock);
-	ida_remove(&gcwq->worker_ida, id);
+	ida_remove(&pool->worker_ida, id);
 }
 
 static void idle_worker_timeout(unsigned long __gcwq)
@@ -1479,11 +1491,12 @@ static void idle_worker_timeout(unsigned long __gcwq)
 		unsigned long expires;
 
 		/* idle_list is kept in LIFO order, check the last one */
-		worker = list_entry(gcwq->idle_list.prev, struct worker, entry);
+		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
+				    entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires))
-			mod_timer(&gcwq->idle_timer, expires);
+			mod_timer(&gcwq->pool.idle_timer, expires);
 		else {
 			/* it's been idle for too long, wake up manager */
 			gcwq->flags |= GCWQ_MANAGE_WORKERS;
@@ -1504,7 +1517,7 @@ static bool send_mayday(struct work_struct *work)
 		return false;
 
 	/* mayday mayday mayday */
-	cpu = cwq->gcwq->cpu;
+	cpu = cwq->pool->gcwq->cpu;
 	/* WORK_CPU_UNBOUND can't be set in cpumask, use cpu 0 instead */
 	if (cpu == WORK_CPU_UNBOUND)
 		cpu = 0;
@@ -1527,13 +1540,13 @@ static void gcwq_mayday_timeout(unsigned long __gcwq)
 		 * allocation deadlock.  Send distress signals to
 		 * rescuers.
 		 */
-		list_for_each_entry(work, &gcwq->worklist, entry)
+		list_for_each_entry(work, &gcwq->pool.worklist, entry)
 			send_mayday(work);
 	}
 
 	spin_unlock_irq(&gcwq->lock);
 
-	mod_timer(&gcwq->mayday_timer, jiffies + MAYDAY_INTERVAL);
+	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INTERVAL);
 }
 
 /**
@@ -1568,14 +1581,14 @@ restart:
 	spin_unlock_irq(&gcwq->lock);
 
 	/* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
-	mod_timer(&gcwq->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
+	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
 
 	while (true) {
 		struct worker *worker;
 
 		worker = create_worker(gcwq, true);
 		if (worker) {
-			del_timer_sync(&gcwq->mayday_timer);
+			del_timer_sync(&gcwq->pool.mayday_timer);
 			spin_lock_irq(&gcwq->lock);
 			start_worker(worker);
 			BUG_ON(need_to_create_worker(gcwq));
@@ -1592,7 +1605,7 @@ restart:
 			break;
 	}
 
-	del_timer_sync(&gcwq->mayday_timer);
+	del_timer_sync(&gcwq->pool.mayday_timer);
 	spin_lock_irq(&gcwq->lock);
 	if (need_to_create_worker(gcwq))
 		goto restart;
@@ -1622,11 +1635,12 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
 		struct worker *worker;
 		unsigned long expires;
 
-		worker = list_entry(gcwq->idle_list.prev, struct worker, entry);
+		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
+				    entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires)) {
-			mod_timer(&gcwq->idle_timer, expires);
+			mod_timer(&gcwq->pool.idle_timer, expires);
 			break;
 		}
 
@@ -1659,7 +1673,7 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
  */
 static bool manage_workers(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	bool ret = false;
 
 	if (gcwq->flags & GCWQ_MANAGING_WORKERS)
@@ -1732,7 +1746,7 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *work = list_first_entry(&cwq->delayed_works,
 						    struct work_struct, entry);
-	struct list_head *pos = gcwq_determine_ins_pos(cwq->gcwq, cwq);
+	struct list_head *pos = gcwq_determine_ins_pos(cwq->pool->gcwq, cwq);
 
 	trace_workqueue_activate_work(work);
 	move_linked_works(work, pos, NULL);
@@ -1808,7 +1822,8 @@ __releases(&gcwq->lock)
 __acquires(&gcwq->lock)
 {
 	struct cpu_workqueue_struct *cwq = get_work_cwq(work);
-	struct global_cwq *gcwq = cwq->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	struct hlist_head *bwh = busy_worker_head(gcwq, work);
 	bool cpu_intensive = cwq->wq->flags & WQ_CPU_INTENSIVE;
 	work_func_t f = work->func;
@@ -1854,10 +1869,10 @@ __acquires(&gcwq->lock)
 	 * wake up another worker; otherwise, clear HIGHPRI_PENDING.
 	 */
 	if (unlikely(gcwq->flags & GCWQ_HIGHPRI_PENDING)) {
-		struct work_struct *nwork = list_first_entry(&gcwq->worklist,
-						struct work_struct, entry);
+		struct work_struct *nwork = list_first_entry(&pool->worklist,
+					 struct work_struct, entry);
 
-		if (!list_empty(&gcwq->worklist) &&
+		if (!list_empty(&pool->worklist) &&
 		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
 			wake_up_worker(gcwq);
 		else
@@ -1950,7 +1965,8 @@ static void process_scheduled_works(struct worker *worker)
 static int worker_thread(void *__worker)
 {
 	struct worker *worker = __worker;
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	/* tell the scheduler that this is a workqueue worker */
 	worker->task->flags |= PF_WQ_WORKER;
@@ -1990,7 +2006,7 @@ recheck:
 
 	do {
 		struct work_struct *work =
-			list_first_entry(&gcwq->worklist,
+			list_first_entry(&pool->worklist,
 					 struct work_struct, entry);
 
 		if (likely(!(*work_data_bits(work) & WORK_STRUCT_LINKED))) {
@@ -2064,14 +2080,15 @@ repeat:
 	for_each_mayday_cpu(cpu, wq->mayday_mask) {
 		unsigned int tcpu = is_unbound ? WORK_CPU_UNBOUND : cpu;
 		struct cpu_workqueue_struct *cwq = get_cwq(tcpu, wq);
-		struct global_cwq *gcwq = cwq->gcwq;
+		struct worker_pool *pool = cwq->pool;
+		struct global_cwq *gcwq = pool->gcwq;
 		struct work_struct *work, *n;
 
 		__set_current_state(TASK_RUNNING);
 		mayday_clear_cpu(cpu, wq->mayday_mask);
 
 		/* migrate to the target cpu if possible */
-		rescuer->gcwq = gcwq;
+		rescuer->pool = pool;
 		worker_maybe_bind_and_lock(rescuer);
 
 		/*
@@ -2079,7 +2096,7 @@ repeat:
 		 * process'em.
 		 */
 		BUG_ON(!list_empty(&rescuer->scheduled));
-		list_for_each_entry_safe(work, n, &gcwq->worklist, entry)
+		list_for_each_entry_safe(work, n, &pool->worklist, entry)
 			if (get_work_cwq(work) == cwq)
 				move_linked_works(work, scheduled, &n);
 
@@ -2216,7 +2233,7 @@ static bool flush_workqueue_prep_cwqs(struct workqueue_struct *wq,
 
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
-		struct global_cwq *gcwq = cwq->gcwq;
+		struct global_cwq *gcwq = cwq->pool->gcwq;
 
 		spin_lock_irq(&gcwq->lock);
 
@@ -2432,9 +2449,9 @@ reflush:
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
 		bool drained;
 
-		spin_lock_irq(&cwq->gcwq->lock);
+		spin_lock_irq(&cwq->pool->gcwq->lock);
 		drained = !cwq->nr_active && list_empty(&cwq->delayed_works);
-		spin_unlock_irq(&cwq->gcwq->lock);
+		spin_unlock_irq(&cwq->pool->gcwq->lock);
 
 		if (drained)
 			continue;
@@ -2474,7 +2491,7 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
 		 */
 		smp_rmb();
 		cwq = get_work_cwq(work);
-		if (unlikely(!cwq || gcwq != cwq->gcwq))
+		if (unlikely(!cwq || gcwq != cwq->pool->gcwq))
 			goto already_gone;
 	} else if (wait_executing) {
 		worker = find_worker_executing_work(gcwq, work);
@@ -3017,7 +3034,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 		struct global_cwq *gcwq = get_gcwq(cpu);
 
 		BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
-		cwq->gcwq = gcwq;
+		cwq->pool = &gcwq->pool;
 		cwq->wq = wq;
 		cwq->flush_color = -1;
 		cwq->max_active = max_active;
@@ -3344,7 +3361,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 
 	gcwq->flags |= GCWQ_MANAGING_WORKERS;
 
-	list_for_each_entry(worker, &gcwq->idle_list, entry)
+	list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
 		worker->flags |= WORKER_ROGUE;
 
 	for_each_busy_worker(worker, i, pos, gcwq)
@@ -3369,7 +3386,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	atomic_set(get_gcwq_nr_running(gcwq->cpu), 0);
 
 	spin_unlock_irq(&gcwq->lock);
-	del_timer_sync(&gcwq->idle_timer);
+	del_timer_sync(&gcwq->pool.idle_timer);
 	spin_lock_irq(&gcwq->lock);
 
 	/*
@@ -3391,17 +3408,17 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * may be frozen works in freezable cwqs.  Don't declare
 	 * completion while frozen.
 	 */
-	while (gcwq->nr_workers != gcwq->nr_idle ||
+	while (gcwq->pool.nr_workers != gcwq->pool.nr_idle ||
 	       gcwq->flags & GCWQ_FREEZING ||
 	       gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
 		int nr_works = 0;
 
-		list_for_each_entry(work, &gcwq->worklist, entry) {
+		list_for_each_entry(work, &gcwq->pool.worklist, entry) {
 			send_mayday(work);
 			nr_works++;
 		}
 
-		list_for_each_entry(worker, &gcwq->idle_list, entry) {
+		list_for_each_entry(worker, &gcwq->pool.idle_list, entry) {
 			if (!nr_works--)
 				break;
 			wake_up_process(worker->task);
@@ -3428,11 +3445,11 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * all workers till we're canceled.
 	 */
 	do {
-		rc = trustee_wait_event(!list_empty(&gcwq->idle_list));
-		while (!list_empty(&gcwq->idle_list))
-			destroy_worker(list_first_entry(&gcwq->idle_list,
+		rc = trustee_wait_event(!list_empty(&gcwq->pool.idle_list));
+		while (!list_empty(&gcwq->pool.idle_list))
+			destroy_worker(list_first_entry(&gcwq->pool.idle_list,
 							struct worker, entry));
-	} while (gcwq->nr_workers && rc >= 0);
+	} while (gcwq->pool.nr_workers && rc >= 0);
 
 	/*
 	 * At this point, either draining has completed and no worker
@@ -3441,7 +3458,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * Tell the remaining busy ones to rebind once it finishes the
 	 * currently scheduled works by scheduling the rebind_work.
 	 */
-	WARN_ON(!list_empty(&gcwq->idle_list));
+	WARN_ON(!list_empty(&gcwq->pool.idle_list));
 
 	for_each_busy_worker(worker, i, pos, gcwq) {
 		struct work_struct *rebind_work = &worker->rebind_work;
@@ -3522,7 +3539,7 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		kthread_bind(new_trustee, cpu);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->first_idle);
+		BUG_ON(gcwq->pool.first_idle);
 		new_worker = create_worker(gcwq, false);
 		if (!new_worker) {
 			if (new_trustee)
@@ -3544,8 +3561,8 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->first_idle);
-		gcwq->first_idle = new_worker;
+		BUG_ON(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = new_worker;
 		break;
 
 	case CPU_DYING:
@@ -3562,8 +3579,8 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		gcwq->trustee_state = TRUSTEE_BUTCHER;
 		/* fall through */
 	case CPU_UP_CANCELED:
-		destroy_worker(gcwq->first_idle);
-		gcwq->first_idle = NULL;
+		destroy_worker(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = NULL;
 		break;
 
 	case CPU_DOWN_FAILED:
@@ -3581,11 +3598,11 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		 * take a look.
 		 */
 		spin_unlock_irq(&gcwq->lock);
-		kthread_bind(gcwq->first_idle->task, cpu);
+		kthread_bind(gcwq->pool.first_idle->task, cpu);
 		spin_lock_irq(&gcwq->lock);
 		gcwq->flags |= GCWQ_MANAGE_WORKERS;
-		start_worker(gcwq->first_idle);
-		gcwq->first_idle = NULL;
+		start_worker(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = NULL;
 		break;
 	}
 
@@ -3794,22 +3811,23 @@ static int __init init_workqueues(void)
 		struct global_cwq *gcwq = get_gcwq(cpu);
 
 		spin_lock_init(&gcwq->lock);
-		INIT_LIST_HEAD(&gcwq->worklist);
+		gcwq->pool.gcwq = gcwq;
+		INIT_LIST_HEAD(&gcwq->pool.worklist);
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		INIT_LIST_HEAD(&gcwq->idle_list);
+		INIT_LIST_HEAD(&gcwq->pool.idle_list);
 		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
 			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
 
-		init_timer_deferrable(&gcwq->idle_timer);
-		gcwq->idle_timer.function = idle_worker_timeout;
-		gcwq->idle_timer.data = (unsigned long)gcwq;
+		init_timer_deferrable(&gcwq->pool.idle_timer);
+		gcwq->pool.idle_timer.function = idle_worker_timeout;
+		gcwq->pool.idle_timer.data = (unsigned long)gcwq;
 
-		setup_timer(&gcwq->mayday_timer, gcwq_mayday_timeout,
+		setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
 			    (unsigned long)gcwq);
 
-		ida_init(&gcwq->worker_ida);
+		ida_init(&gcwq->pool.worker_ida);
 
 		gcwq->trustee_state = TRUSTEE_DONE;
 		init_waitqueue_head(&gcwq->trustee_wait);
-- 
1.7.7.3



* [PATCH 2/6] workqueue: factor out worker_pool from global_cwq
@ 2012-07-09 18:41   ` Tejun Heo
  0 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-09 18:41 UTC (permalink / raw)
  To: linux-kernel
  Cc: axboe, elder, rni, martin.petersen, linux-bluetooth, torvalds,
	marcel, vwadekar, swhiteho, herbert, bpm, linux-crypto, gustavo,
	Tejun Heo, xfs, joshhunt00, davem, vgoyal, johan.hedberg

Move the worklist and all worker management fields from global_cwq
into the new struct worker_pool.  worker_pool points back to the
containing gcwq.  worker and cpu_workqueue_struct are likewise updated
to point to the worker_pool instead of the gcwq.

This change is mechanical and introduces no functional difference
beyond the rearrangement of fields and an added level of indirection
in some places.  It prepares for multiple pools per gcwq.
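
The resulting ownership chain can be sketched with simplified
stand-in structs (only the fields touched by this patch; these are
illustrative models, not the kernel definitions):

```c
#include <assert.h>
#include <stddef.h>

struct global_cwq;

/* worker_pool now owns the worklist and worker-management state and
 * points back at the gcwq that embeds it. */
struct worker_pool {
	struct global_cwq *gcwq;	/* I: the owning gcwq */
	int nr_workers;			/* L: total number of workers */
	int nr_idle;			/* L: currently idle ones */
};

struct global_cwq {
	unsigned int cpu;		/* I: the associated cpu */
	struct worker_pool pool;	/* the worker pool, embedded */
};

/* workers now reference the pool; the gcwq is one extra hop away. */
struct worker {
	struct worker_pool *pool;	/* I: the associated pool (was: gcwq) */
};

static struct global_cwq *worker_gcwq(struct worker *w)
{
	return w->pool->gcwq;
}
```

This is the indirection visible throughout the diff as
`worker->pool->gcwq` replacing `worker->gcwq`.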

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 include/trace/events/workqueue.h |    2 +-
 kernel/workqueue.c               |  216 ++++++++++++++++++++-----------------
 2 files changed, 118 insertions(+), 100 deletions(-)

diff --git a/include/trace/events/workqueue.h b/include/trace/events/workqueue.h
index 4018f50..f28d1b6 100644
--- a/include/trace/events/workqueue.h
+++ b/include/trace/events/workqueue.h
@@ -54,7 +54,7 @@ TRACE_EVENT(workqueue_queue_work,
 		__entry->function	= work->func;
 		__entry->workqueue	= cwq->wq;
 		__entry->req_cpu	= req_cpu;
-		__entry->cpu		= cwq->gcwq->cpu;
+		__entry->cpu		= cwq->pool->gcwq->cpu;
 	),
 
 	TP_printk("work struct=%p function=%pf workqueue=%p req_cpu=%u cpu=%u",
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 27637c2..bc43a0c 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -115,6 +115,7 @@ enum {
  */
 
 struct global_cwq;
+struct worker_pool;
 
 /*
  * The poor guys doing the actual heavy lifting.  All on-duty workers
@@ -131,7 +132,7 @@ struct worker {
 	struct cpu_workqueue_struct *current_cwq; /* L: current_work's cwq */
 	struct list_head	scheduled;	/* L: scheduled works */
 	struct task_struct	*task;		/* I: worker task */
-	struct global_cwq	*gcwq;		/* I: the associated gcwq */
+	struct worker_pool	*pool;		/* I: the associated pool */
 	/* 64 bytes boundary on 64bit, 32 on 32bit */
 	unsigned long		last_active;	/* L: last active timestamp */
 	unsigned int		flags;		/* X: flags */
@@ -139,6 +140,21 @@ struct worker {
 	struct work_struct	rebind_work;	/* L: rebind worker to cpu */
 };
 
+struct worker_pool {
+	struct global_cwq	*gcwq;		/* I: the owning gcwq */
+
+	struct list_head	worklist;	/* L: list of pending works */
+	int			nr_workers;	/* L: total number of workers */
+	int			nr_idle;	/* L: currently idle ones */
+
+	struct list_head	idle_list;	/* X: list of idle workers */
+	struct timer_list	idle_timer;	/* L: worker idle timeout */
+	struct timer_list	mayday_timer;	/* L: SOS timer for dworkers */
+
+	struct ida		worker_ida;	/* L: for worker IDs */
+	struct worker		*first_idle;	/* L: first idle worker */
+};
+
 /*
  * Global per-cpu workqueue.  There's one and only one for each cpu
  * and all works are queued and processed here regardless of their
@@ -146,27 +162,18 @@ struct worker {
  */
 struct global_cwq {
 	spinlock_t		lock;		/* the gcwq lock */
-	struct list_head	worklist;	/* L: list of pending works */
 	unsigned int		cpu;		/* I: the associated cpu */
 	unsigned int		flags;		/* L: GCWQ_* flags */
 
-	int			nr_workers;	/* L: total number of workers */
-	int			nr_idle;	/* L: currently idle ones */
-
-	/* workers are chained either in the idle_list or busy_hash */
-	struct list_head	idle_list;	/* X: list of idle workers */
+	/* workers are chained either in busy_hash or pool idle_list */
 	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
 						/* L: hash of busy workers */
 
-	struct timer_list	idle_timer;	/* L: worker idle timeout */
-	struct timer_list	mayday_timer;	/* L: SOS timer for dworkers */
-
-	struct ida		worker_ida;	/* L: for worker IDs */
+	struct worker_pool	pool;		/* the worker pools */
 
 	struct task_struct	*trustee;	/* L: for gcwq shutdown */
 	unsigned int		trustee_state;	/* L: trustee state */
 	wait_queue_head_t	trustee_wait;	/* trustee wait */
-	struct worker		*first_idle;	/* L: first idle worker */
 } ____cacheline_aligned_in_smp;
 
 /*
@@ -175,7 +182,7 @@ struct global_cwq {
  * aligned at two's power of the number of flag bits.
  */
 struct cpu_workqueue_struct {
-	struct global_cwq	*gcwq;		/* I: the associated gcwq */
+	struct worker_pool	*pool;		/* I: the associated pool */
 	struct workqueue_struct *wq;		/* I: the owning workqueue */
 	int			work_color;	/* L: current color */
 	int			flush_color;	/* L: flushing color */
@@ -555,7 +562,7 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
 
 	if (data & WORK_STRUCT_CWQ)
 		return ((struct cpu_workqueue_struct *)
-			(data & WORK_STRUCT_WQ_DATA_MASK))->gcwq;
+			(data & WORK_STRUCT_WQ_DATA_MASK))->pool->gcwq;
 
 	cpu = data >> WORK_STRUCT_FLAG_BITS;
 	if (cpu == WORK_CPU_NONE)
@@ -587,13 +594,13 @@ static bool __need_more_worker(struct global_cwq *gcwq)
  */
 static bool need_more_worker(struct global_cwq *gcwq)
 {
-	return !list_empty(&gcwq->worklist) && __need_more_worker(gcwq);
+	return !list_empty(&gcwq->pool.worklist) && __need_more_worker(gcwq);
 }
 
 /* Can I start working?  Called from busy but !running workers. */
 static bool may_start_working(struct global_cwq *gcwq)
 {
-	return gcwq->nr_idle;
+	return gcwq->pool.nr_idle;
 }
 
 /* Do I need to keep working?  Called from currently running workers. */
@@ -601,7 +608,7 @@ static bool keep_working(struct global_cwq *gcwq)
 {
 	atomic_t *nr_running = get_gcwq_nr_running(gcwq->cpu);
 
-	return !list_empty(&gcwq->worklist) &&
+	return !list_empty(&gcwq->pool.worklist) &&
 		(atomic_read(nr_running) <= 1 ||
 		 gcwq->flags & GCWQ_HIGHPRI_PENDING);
 }
@@ -622,8 +629,8 @@ static bool need_to_manage_workers(struct global_cwq *gcwq)
 static bool too_many_workers(struct global_cwq *gcwq)
 {
 	bool managing = gcwq->flags & GCWQ_MANAGING_WORKERS;
-	int nr_idle = gcwq->nr_idle + managing; /* manager is considered idle */
-	int nr_busy = gcwq->nr_workers - nr_idle;
+	int nr_idle = gcwq->pool.nr_idle + managing; /* manager is considered idle */
+	int nr_busy = gcwq->pool.nr_workers - nr_idle;
 
 	return nr_idle > 2 && (nr_idle - 2) * MAX_IDLE_WORKERS_RATIO >= nr_busy;
 }
@@ -635,10 +642,10 @@ static bool too_many_workers(struct global_cwq *gcwq)
 /* Return the first worker.  Safe with preemption disabled */
 static struct worker *first_worker(struct global_cwq *gcwq)
 {
-	if (unlikely(list_empty(&gcwq->idle_list)))
+	if (unlikely(list_empty(&gcwq->pool.idle_list)))
 		return NULL;
 
-	return list_first_entry(&gcwq->idle_list, struct worker, entry);
+	return list_first_entry(&gcwq->pool.idle_list, struct worker, entry);
 }
 
 /**
@@ -696,7 +703,8 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 				       unsigned int cpu)
 {
 	struct worker *worker = kthread_data(task), *to_wakeup = NULL;
-	struct global_cwq *gcwq = get_gcwq(cpu);
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	atomic_t *nr_running = get_gcwq_nr_running(cpu);
 
 	if (worker->flags & WORKER_NOT_RUNNING)
@@ -716,7 +724,7 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 	 * could be manipulating idle_list, so dereferencing idle_list
 	 * without gcwq lock is safe.
 	 */
-	if (atomic_dec_and_test(nr_running) && !list_empty(&gcwq->worklist))
+	if (atomic_dec_and_test(nr_running) && !list_empty(&pool->worklist))
 		to_wakeup = first_worker(gcwq);
 	return to_wakeup ? to_wakeup->task : NULL;
 }
@@ -737,7 +745,8 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 static inline void worker_set_flags(struct worker *worker, unsigned int flags,
 				    bool wakeup)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	WARN_ON_ONCE(worker->task != current);
 
@@ -752,7 +761,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
 
 		if (wakeup) {
 			if (atomic_dec_and_test(nr_running) &&
-			    !list_empty(&gcwq->worklist))
+			    !list_empty(&pool->worklist))
 				wake_up_worker(gcwq);
 		} else
 			atomic_dec(nr_running);
@@ -773,7 +782,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
  */
 static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	unsigned int oflags = worker->flags;
 
 	WARN_ON_ONCE(worker->task != current);
@@ -894,9 +903,9 @@ static inline struct list_head *gcwq_determine_ins_pos(struct global_cwq *gcwq,
 	struct work_struct *twork;
 
 	if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
-		return &gcwq->worklist;
+		return &gcwq->pool.worklist;
 
-	list_for_each_entry(twork, &gcwq->worklist, entry) {
+	list_for_each_entry(twork, &gcwq->pool.worklist, entry) {
 		struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
 
 		if (!(tcwq->wq->flags & WQ_HIGHPRI))
@@ -924,7 +933,7 @@ static void insert_work(struct cpu_workqueue_struct *cwq,
 			struct work_struct *work, struct list_head *head,
 			unsigned int extra_flags)
 {
-	struct global_cwq *gcwq = cwq->gcwq;
+	struct global_cwq *gcwq = cwq->pool->gcwq;
 
 	/* we own @work, set data and link */
 	set_work_cwq(work, cwq, extra_flags);
@@ -1196,7 +1205,8 @@ EXPORT_SYMBOL_GPL(queue_delayed_work_on);
  */
 static void worker_enter_idle(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	BUG_ON(worker->flags & WORKER_IDLE);
 	BUG_ON(!list_empty(&worker->entry) &&
@@ -1204,15 +1214,15 @@ static void worker_enter_idle(struct worker *worker)
 
 	/* can't use worker_set_flags(), also called from start_worker() */
 	worker->flags |= WORKER_IDLE;
-	gcwq->nr_idle++;
+	pool->nr_idle++;
 	worker->last_active = jiffies;
 
 	/* idle_list is LIFO */
-	list_add(&worker->entry, &gcwq->idle_list);
+	list_add(&worker->entry, &pool->idle_list);
 
 	if (likely(!(worker->flags & WORKER_ROGUE))) {
-		if (too_many_workers(gcwq) && !timer_pending(&gcwq->idle_timer))
-			mod_timer(&gcwq->idle_timer,
+		if (too_many_workers(gcwq) && !timer_pending(&pool->idle_timer))
+			mod_timer(&pool->idle_timer,
 				  jiffies + IDLE_WORKER_TIMEOUT);
 	} else
 		wake_up_all(&gcwq->trustee_wait);
@@ -1223,7 +1233,7 @@ static void worker_enter_idle(struct worker *worker)
 	 * warning may trigger spuriously.  Check iff trustee is idle.
 	 */
 	WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
-		     gcwq->nr_workers == gcwq->nr_idle &&
+		     pool->nr_workers == pool->nr_idle &&
 		     atomic_read(get_gcwq_nr_running(gcwq->cpu)));
 }
 
@@ -1238,11 +1248,11 @@ static void worker_enter_idle(struct worker *worker)
  */
 static void worker_leave_idle(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
 
 	BUG_ON(!(worker->flags & WORKER_IDLE));
 	worker_clr_flags(worker, WORKER_IDLE);
-	gcwq->nr_idle--;
+	pool->nr_idle--;
 	list_del_init(&worker->entry);
 }
 
@@ -1279,7 +1289,7 @@ static void worker_leave_idle(struct worker *worker)
 static bool worker_maybe_bind_and_lock(struct worker *worker)
 __acquires(&gcwq->lock)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	struct task_struct *task = worker->task;
 
 	while (true) {
@@ -1321,7 +1331,7 @@ __acquires(&gcwq->lock)
 static void worker_rebind_fn(struct work_struct *work)
 {
 	struct worker *worker = container_of(work, struct worker, rebind_work);
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 
 	if (worker_maybe_bind_and_lock(worker))
 		worker_clr_flags(worker, WORKER_REBIND);
@@ -1362,13 +1372,14 @@ static struct worker *alloc_worker(void)
 static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 {
 	bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+	struct worker_pool *pool = &gcwq->pool;
 	struct worker *worker = NULL;
 	int id = -1;
 
 	spin_lock_irq(&gcwq->lock);
-	while (ida_get_new(&gcwq->worker_ida, &id)) {
+	while (ida_get_new(&pool->worker_ida, &id)) {
 		spin_unlock_irq(&gcwq->lock);
-		if (!ida_pre_get(&gcwq->worker_ida, GFP_KERNEL))
+		if (!ida_pre_get(&pool->worker_ida, GFP_KERNEL))
 			goto fail;
 		spin_lock_irq(&gcwq->lock);
 	}
@@ -1378,7 +1389,7 @@ static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 	if (!worker)
 		goto fail;
 
-	worker->gcwq = gcwq;
+	worker->pool = pool;
 	worker->id = id;
 
 	if (!on_unbound_cpu)
@@ -1409,7 +1420,7 @@ static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 fail:
 	if (id >= 0) {
 		spin_lock_irq(&gcwq->lock);
-		ida_remove(&gcwq->worker_ida, id);
+		ida_remove(&pool->worker_ida, id);
 		spin_unlock_irq(&gcwq->lock);
 	}
 	kfree(worker);
@@ -1428,7 +1439,7 @@ fail:
 static void start_worker(struct worker *worker)
 {
 	worker->flags |= WORKER_STARTED;
-	worker->gcwq->nr_workers++;
+	worker->pool->nr_workers++;
 	worker_enter_idle(worker);
 	wake_up_process(worker->task);
 }
@@ -1444,7 +1455,8 @@ static void start_worker(struct worker *worker)
  */
 static void destroy_worker(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	int id = worker->id;
 
 	/* sanity check frenzy */
@@ -1452,9 +1464,9 @@ static void destroy_worker(struct worker *worker)
 	BUG_ON(!list_empty(&worker->scheduled));
 
 	if (worker->flags & WORKER_STARTED)
-		gcwq->nr_workers--;
+		pool->nr_workers--;
 	if (worker->flags & WORKER_IDLE)
-		gcwq->nr_idle--;
+		pool->nr_idle--;
 
 	list_del_init(&worker->entry);
 	worker->flags |= WORKER_DIE;
@@ -1465,7 +1477,7 @@ static void destroy_worker(struct worker *worker)
 	kfree(worker);
 
 	spin_lock_irq(&gcwq->lock);
-	ida_remove(&gcwq->worker_ida, id);
+	ida_remove(&pool->worker_ida, id);
 }
 
 static void idle_worker_timeout(unsigned long __gcwq)
@@ -1479,11 +1491,12 @@ static void idle_worker_timeout(unsigned long __gcwq)
 		unsigned long expires;
 
 		/* idle_list is kept in LIFO order, check the last one */
-		worker = list_entry(gcwq->idle_list.prev, struct worker, entry);
+		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
+				    entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires))
-			mod_timer(&gcwq->idle_timer, expires);
+			mod_timer(&gcwq->pool.idle_timer, expires);
 		else {
 			/* it's been idle for too long, wake up manager */
 			gcwq->flags |= GCWQ_MANAGE_WORKERS;
@@ -1504,7 +1517,7 @@ static bool send_mayday(struct work_struct *work)
 		return false;
 
 	/* mayday mayday mayday */
-	cpu = cwq->gcwq->cpu;
+	cpu = cwq->pool->gcwq->cpu;
 	/* WORK_CPU_UNBOUND can't be set in cpumask, use cpu 0 instead */
 	if (cpu == WORK_CPU_UNBOUND)
 		cpu = 0;
@@ -1527,13 +1540,13 @@ static void gcwq_mayday_timeout(unsigned long __gcwq)
 		 * allocation deadlock.  Send distress signals to
 		 * rescuers.
 		 */
-		list_for_each_entry(work, &gcwq->worklist, entry)
+		list_for_each_entry(work, &gcwq->pool.worklist, entry)
 			send_mayday(work);
 	}
 
 	spin_unlock_irq(&gcwq->lock);
 
-	mod_timer(&gcwq->mayday_timer, jiffies + MAYDAY_INTERVAL);
+	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INTERVAL);
 }
 
 /**
@@ -1568,14 +1581,14 @@ restart:
 	spin_unlock_irq(&gcwq->lock);
 
 	/* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
-	mod_timer(&gcwq->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
+	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
 
 	while (true) {
 		struct worker *worker;
 
 		worker = create_worker(gcwq, true);
 		if (worker) {
-			del_timer_sync(&gcwq->mayday_timer);
+			del_timer_sync(&gcwq->pool.mayday_timer);
 			spin_lock_irq(&gcwq->lock);
 			start_worker(worker);
 			BUG_ON(need_to_create_worker(gcwq));
@@ -1592,7 +1605,7 @@ restart:
 			break;
 	}
 
-	del_timer_sync(&gcwq->mayday_timer);
+	del_timer_sync(&gcwq->pool.mayday_timer);
 	spin_lock_irq(&gcwq->lock);
 	if (need_to_create_worker(gcwq))
 		goto restart;
@@ -1622,11 +1635,12 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
 		struct worker *worker;
 		unsigned long expires;
 
-		worker = list_entry(gcwq->idle_list.prev, struct worker, entry);
+		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
+				    entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires)) {
-			mod_timer(&gcwq->idle_timer, expires);
+			mod_timer(&gcwq->pool.idle_timer, expires);
 			break;
 		}
 
@@ -1659,7 +1673,7 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
  */
 static bool manage_workers(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	bool ret = false;
 
 	if (gcwq->flags & GCWQ_MANAGING_WORKERS)
@@ -1732,7 +1746,7 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *work = list_first_entry(&cwq->delayed_works,
 						    struct work_struct, entry);
-	struct list_head *pos = gcwq_determine_ins_pos(cwq->gcwq, cwq);
+	struct list_head *pos = gcwq_determine_ins_pos(cwq->pool->gcwq, cwq);
 
 	trace_workqueue_activate_work(work);
 	move_linked_works(work, pos, NULL);
@@ -1808,7 +1822,8 @@ __releases(&gcwq->lock)
 __acquires(&gcwq->lock)
 {
 	struct cpu_workqueue_struct *cwq = get_work_cwq(work);
-	struct global_cwq *gcwq = cwq->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	struct hlist_head *bwh = busy_worker_head(gcwq, work);
 	bool cpu_intensive = cwq->wq->flags & WQ_CPU_INTENSIVE;
 	work_func_t f = work->func;
@@ -1854,10 +1869,10 @@ __acquires(&gcwq->lock)
 	 * wake up another worker; otherwise, clear HIGHPRI_PENDING.
 	 */
 	if (unlikely(gcwq->flags & GCWQ_HIGHPRI_PENDING)) {
-		struct work_struct *nwork = list_first_entry(&gcwq->worklist,
-						struct work_struct, entry);
+		struct work_struct *nwork = list_first_entry(&pool->worklist,
+					 struct work_struct, entry);
 
-		if (!list_empty(&gcwq->worklist) &&
+		if (!list_empty(&pool->worklist) &&
 		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
 			wake_up_worker(gcwq);
 		else
@@ -1950,7 +1965,8 @@ static void process_scheduled_works(struct worker *worker)
 static int worker_thread(void *__worker)
 {
 	struct worker *worker = __worker;
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	/* tell the scheduler that this is a workqueue worker */
 	worker->task->flags |= PF_WQ_WORKER;
@@ -1990,7 +2006,7 @@ recheck:
 
 	do {
 		struct work_struct *work =
-			list_first_entry(&gcwq->worklist,
+			list_first_entry(&pool->worklist,
 					 struct work_struct, entry);
 
 		if (likely(!(*work_data_bits(work) & WORK_STRUCT_LINKED))) {
@@ -2064,14 +2080,15 @@ repeat:
 	for_each_mayday_cpu(cpu, wq->mayday_mask) {
 		unsigned int tcpu = is_unbound ? WORK_CPU_UNBOUND : cpu;
 		struct cpu_workqueue_struct *cwq = get_cwq(tcpu, wq);
-		struct global_cwq *gcwq = cwq->gcwq;
+		struct worker_pool *pool = cwq->pool;
+		struct global_cwq *gcwq = pool->gcwq;
 		struct work_struct *work, *n;
 
 		__set_current_state(TASK_RUNNING);
 		mayday_clear_cpu(cpu, wq->mayday_mask);
 
 		/* migrate to the target cpu if possible */
-		rescuer->gcwq = gcwq;
+		rescuer->pool = pool;
 		worker_maybe_bind_and_lock(rescuer);
 
 		/*
@@ -2079,7 +2096,7 @@ repeat:
 		 * process'em.
 		 */
 		BUG_ON(!list_empty(&rescuer->scheduled));
-		list_for_each_entry_safe(work, n, &gcwq->worklist, entry)
+		list_for_each_entry_safe(work, n, &pool->worklist, entry)
 			if (get_work_cwq(work) == cwq)
 				move_linked_works(work, scheduled, &n);
 
@@ -2216,7 +2233,7 @@ static bool flush_workqueue_prep_cwqs(struct workqueue_struct *wq,
 
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
-		struct global_cwq *gcwq = cwq->gcwq;
+		struct global_cwq *gcwq = cwq->pool->gcwq;
 
 		spin_lock_irq(&gcwq->lock);
 
@@ -2432,9 +2449,9 @@ reflush:
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
 		bool drained;
 
-		spin_lock_irq(&cwq->gcwq->lock);
+		spin_lock_irq(&cwq->pool->gcwq->lock);
 		drained = !cwq->nr_active && list_empty(&cwq->delayed_works);
-		spin_unlock_irq(&cwq->gcwq->lock);
+		spin_unlock_irq(&cwq->pool->gcwq->lock);
 
 		if (drained)
 			continue;
@@ -2474,7 +2491,7 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
 		 */
 		smp_rmb();
 		cwq = get_work_cwq(work);
-		if (unlikely(!cwq || gcwq != cwq->gcwq))
+		if (unlikely(!cwq || gcwq != cwq->pool->gcwq))
 			goto already_gone;
 	} else if (wait_executing) {
 		worker = find_worker_executing_work(gcwq, work);
@@ -3017,7 +3034,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 		struct global_cwq *gcwq = get_gcwq(cpu);
 
 		BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
-		cwq->gcwq = gcwq;
+		cwq->pool = &gcwq->pool;
 		cwq->wq = wq;
 		cwq->flush_color = -1;
 		cwq->max_active = max_active;
@@ -3344,7 +3361,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 
 	gcwq->flags |= GCWQ_MANAGING_WORKERS;
 
-	list_for_each_entry(worker, &gcwq->idle_list, entry)
+	list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
 		worker->flags |= WORKER_ROGUE;
 
 	for_each_busy_worker(worker, i, pos, gcwq)
@@ -3369,7 +3386,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	atomic_set(get_gcwq_nr_running(gcwq->cpu), 0);
 
 	spin_unlock_irq(&gcwq->lock);
-	del_timer_sync(&gcwq->idle_timer);
+	del_timer_sync(&gcwq->pool.idle_timer);
 	spin_lock_irq(&gcwq->lock);
 
 	/*
@@ -3391,17 +3408,17 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * may be frozen works in freezable cwqs.  Don't declare
 	 * completion while frozen.
 	 */
-	while (gcwq->nr_workers != gcwq->nr_idle ||
+	while (gcwq->pool.nr_workers != gcwq->pool.nr_idle ||
 	       gcwq->flags & GCWQ_FREEZING ||
 	       gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
 		int nr_works = 0;
 
-		list_for_each_entry(work, &gcwq->worklist, entry) {
+		list_for_each_entry(work, &gcwq->pool.worklist, entry) {
 			send_mayday(work);
 			nr_works++;
 		}
 
-		list_for_each_entry(worker, &gcwq->idle_list, entry) {
+		list_for_each_entry(worker, &gcwq->pool.idle_list, entry) {
 			if (!nr_works--)
 				break;
 			wake_up_process(worker->task);
@@ -3428,11 +3445,11 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * all workers till we're canceled.
 	 */
 	do {
-		rc = trustee_wait_event(!list_empty(&gcwq->idle_list));
-		while (!list_empty(&gcwq->idle_list))
-			destroy_worker(list_first_entry(&gcwq->idle_list,
+		rc = trustee_wait_event(!list_empty(&gcwq->pool.idle_list));
+		while (!list_empty(&gcwq->pool.idle_list))
+			destroy_worker(list_first_entry(&gcwq->pool.idle_list,
 							struct worker, entry));
-	} while (gcwq->nr_workers && rc >= 0);
+	} while (gcwq->pool.nr_workers && rc >= 0);
 
 	/*
 	 * At this point, either draining has completed and no worker
@@ -3441,7 +3458,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * Tell the remaining busy ones to rebind once it finishes the
 	 * currently scheduled works by scheduling the rebind_work.
 	 */
-	WARN_ON(!list_empty(&gcwq->idle_list));
+	WARN_ON(!list_empty(&gcwq->pool.idle_list));
 
 	for_each_busy_worker(worker, i, pos, gcwq) {
 		struct work_struct *rebind_work = &worker->rebind_work;
@@ -3522,7 +3539,7 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		kthread_bind(new_trustee, cpu);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->first_idle);
+		BUG_ON(gcwq->pool.first_idle);
 		new_worker = create_worker(gcwq, false);
 		if (!new_worker) {
 			if (new_trustee)
@@ -3544,8 +3561,8 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->first_idle);
-		gcwq->first_idle = new_worker;
+		BUG_ON(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = new_worker;
 		break;
 
 	case CPU_DYING:
@@ -3562,8 +3579,8 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		gcwq->trustee_state = TRUSTEE_BUTCHER;
 		/* fall through */
 	case CPU_UP_CANCELED:
-		destroy_worker(gcwq->first_idle);
-		gcwq->first_idle = NULL;
+		destroy_worker(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = NULL;
 		break;
 
 	case CPU_DOWN_FAILED:
@@ -3581,11 +3598,11 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		 * take a look.
 		 */
 		spin_unlock_irq(&gcwq->lock);
-		kthread_bind(gcwq->first_idle->task, cpu);
+		kthread_bind(gcwq->pool.first_idle->task, cpu);
 		spin_lock_irq(&gcwq->lock);
 		gcwq->flags |= GCWQ_MANAGE_WORKERS;
-		start_worker(gcwq->first_idle);
-		gcwq->first_idle = NULL;
+		start_worker(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = NULL;
 		break;
 	}
 
@@ -3794,22 +3811,23 @@ static int __init init_workqueues(void)
 		struct global_cwq *gcwq = get_gcwq(cpu);
 
 		spin_lock_init(&gcwq->lock);
-		INIT_LIST_HEAD(&gcwq->worklist);
+		gcwq->pool.gcwq = gcwq;
+		INIT_LIST_HEAD(&gcwq->pool.worklist);
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		INIT_LIST_HEAD(&gcwq->idle_list);
+		INIT_LIST_HEAD(&gcwq->pool.idle_list);
 		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
 			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
 
-		init_timer_deferrable(&gcwq->idle_timer);
-		gcwq->idle_timer.function = idle_worker_timeout;
-		gcwq->idle_timer.data = (unsigned long)gcwq;
+		init_timer_deferrable(&gcwq->pool.idle_timer);
+		gcwq->pool.idle_timer.function = idle_worker_timeout;
+		gcwq->pool.idle_timer.data = (unsigned long)gcwq;
 
-		setup_timer(&gcwq->mayday_timer, gcwq_mayday_timeout,
+		setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
 			    (unsigned long)gcwq);
 
-		ida_init(&gcwq->worker_ida);
+		ida_init(&gcwq->pool.worker_ida);
 
 		gcwq->trustee_state = TRUSTEE_DONE;
 		init_waitqueue_head(&gcwq->trustee_wait);
-- 
1.7.7.3



* [PATCH 3/6] workqueue: use @pool instead of @gcwq or @cpu where applicable
@ 2012-07-09 18:41   ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-09 18:41 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen, Tejun Heo

Modify all functions which deal with per-pool properties to pass
around @pool instead of @gcwq or @cpu.

The changes in this patch are purely mechanical and cause no
functional difference.  This is in preparation for multiple pools per
gcwq.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 kernel/workqueue.c |  218 ++++++++++++++++++++++++++-------------------------
 1 files changed, 111 insertions(+), 107 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index bc43a0c..9f82c25 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -471,8 +471,10 @@ static struct global_cwq *get_gcwq(unsigned int cpu)
 		return &unbound_global_cwq;
 }
 
-static atomic_t *get_gcwq_nr_running(unsigned int cpu)
+static atomic_t *get_pool_nr_running(struct worker_pool *pool)
 {
+	int cpu = pool->gcwq->cpu;
+
 	if (cpu != WORK_CPU_UNBOUND)
 		return &per_cpu(gcwq_nr_running, cpu);
 	else
@@ -578,10 +580,10 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
  * assume that they're being called with gcwq->lock held.
  */
 
-static bool __need_more_worker(struct global_cwq *gcwq)
+static bool __need_more_worker(struct worker_pool *pool)
 {
-	return !atomic_read(get_gcwq_nr_running(gcwq->cpu)) ||
-		gcwq->flags & GCWQ_HIGHPRI_PENDING;
+	return !atomic_read(get_pool_nr_running(pool)) ||
+		pool->gcwq->flags & GCWQ_HIGHPRI_PENDING;
 }
 
 /*
@@ -592,45 +594,46 @@ static bool __need_more_worker(struct global_cwq *gcwq)
  * function will always return %true for unbound gcwq as long as the
  * worklist isn't empty.
  */
-static bool need_more_worker(struct global_cwq *gcwq)
+static bool need_more_worker(struct worker_pool *pool)
 {
-	return !list_empty(&gcwq->pool.worklist) && __need_more_worker(gcwq);
+	return !list_empty(&pool->worklist) && __need_more_worker(pool);
 }
 
 /* Can I start working?  Called from busy but !running workers. */
-static bool may_start_working(struct global_cwq *gcwq)
+static bool may_start_working(struct worker_pool *pool)
 {
-	return gcwq->pool.nr_idle;
+	return pool->nr_idle;
 }
 
 /* Do I need to keep working?  Called from currently running workers. */
-static bool keep_working(struct global_cwq *gcwq)
+static bool keep_working(struct worker_pool *pool)
 {
-	atomic_t *nr_running = get_gcwq_nr_running(gcwq->cpu);
+	atomic_t *nr_running = get_pool_nr_running(pool);
 
-	return !list_empty(&gcwq->pool.worklist) &&
+	return !list_empty(&pool->worklist) &&
 		(atomic_read(nr_running) <= 1 ||
-		 gcwq->flags & GCWQ_HIGHPRI_PENDING);
+		 pool->gcwq->flags & GCWQ_HIGHPRI_PENDING);
 }
 
 /* Do we need a new worker?  Called from manager. */
-static bool need_to_create_worker(struct global_cwq *gcwq)
+static bool need_to_create_worker(struct worker_pool *pool)
 {
-	return need_more_worker(gcwq) && !may_start_working(gcwq);
+	return need_more_worker(pool) && !may_start_working(pool);
 }
 
 /* Do I need to be the manager? */
-static bool need_to_manage_workers(struct global_cwq *gcwq)
+static bool need_to_manage_workers(struct worker_pool *pool)
 {
-	return need_to_create_worker(gcwq) || gcwq->flags & GCWQ_MANAGE_WORKERS;
+	return need_to_create_worker(pool) ||
+		pool->gcwq->flags & GCWQ_MANAGE_WORKERS;
 }
 
 /* Do we have too many workers and should some go away? */
-static bool too_many_workers(struct global_cwq *gcwq)
+static bool too_many_workers(struct worker_pool *pool)
 {
-	bool managing = gcwq->flags & GCWQ_MANAGING_WORKERS;
-	int nr_idle = gcwq->pool.nr_idle + managing; /* manager is considered idle */
-	int nr_busy = gcwq->pool.nr_workers - nr_idle;
+	bool managing = pool->gcwq->flags & GCWQ_MANAGING_WORKERS;
+	int nr_idle = pool->nr_idle + managing; /* manager is considered idle */
+	int nr_busy = pool->nr_workers - nr_idle;
 
 	return nr_idle > 2 && (nr_idle - 2) * MAX_IDLE_WORKERS_RATIO >= nr_busy;
 }
@@ -640,26 +643,26 @@ static bool too_many_workers(struct global_cwq *gcwq)
  */
 
 /* Return the first worker.  Safe with preemption disabled */
-static struct worker *first_worker(struct global_cwq *gcwq)
+static struct worker *first_worker(struct worker_pool *pool)
 {
-	if (unlikely(list_empty(&gcwq->pool.idle_list)))
+	if (unlikely(list_empty(&pool->idle_list)))
 		return NULL;
 
-	return list_first_entry(&gcwq->pool.idle_list, struct worker, entry);
+	return list_first_entry(&pool->idle_list, struct worker, entry);
 }
 
 /**
  * wake_up_worker - wake up an idle worker
- * @gcwq: gcwq to wake worker for
+ * @pool: worker pool to wake worker from
  *
- * Wake up the first idle worker of @gcwq.
+ * Wake up the first idle worker of @pool.
  *
  * CONTEXT:
  * spin_lock_irq(gcwq->lock).
  */
-static void wake_up_worker(struct global_cwq *gcwq)
+static void wake_up_worker(struct worker_pool *pool)
 {
-	struct worker *worker = first_worker(gcwq);
+	struct worker *worker = first_worker(pool);
 
 	if (likely(worker))
 		wake_up_process(worker->task);
@@ -681,7 +684,7 @@ void wq_worker_waking_up(struct task_struct *task, unsigned int cpu)
 	struct worker *worker = kthread_data(task);
 
 	if (!(worker->flags & WORKER_NOT_RUNNING))
-		atomic_inc(get_gcwq_nr_running(cpu));
+		atomic_inc(get_pool_nr_running(worker->pool));
 }
 
 /**
@@ -704,8 +707,7 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 {
 	struct worker *worker = kthread_data(task), *to_wakeup = NULL;
 	struct worker_pool *pool = worker->pool;
-	struct global_cwq *gcwq = pool->gcwq;
-	atomic_t *nr_running = get_gcwq_nr_running(cpu);
+	atomic_t *nr_running = get_pool_nr_running(pool);
 
 	if (worker->flags & WORKER_NOT_RUNNING)
 		return NULL;
@@ -725,7 +727,7 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 	 * without gcwq lock is safe.
 	 */
 	if (atomic_dec_and_test(nr_running) && !list_empty(&pool->worklist))
-		to_wakeup = first_worker(gcwq);
+		to_wakeup = first_worker(pool);
 	return to_wakeup ? to_wakeup->task : NULL;
 }
 
@@ -746,7 +748,6 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
 				    bool wakeup)
 {
 	struct worker_pool *pool = worker->pool;
-	struct global_cwq *gcwq = pool->gcwq;
 
 	WARN_ON_ONCE(worker->task != current);
 
@@ -757,12 +758,12 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
 	 */
 	if ((flags & WORKER_NOT_RUNNING) &&
 	    !(worker->flags & WORKER_NOT_RUNNING)) {
-		atomic_t *nr_running = get_gcwq_nr_running(gcwq->cpu);
+		atomic_t *nr_running = get_pool_nr_running(pool);
 
 		if (wakeup) {
 			if (atomic_dec_and_test(nr_running) &&
 			    !list_empty(&pool->worklist))
-				wake_up_worker(gcwq);
+				wake_up_worker(pool);
 		} else
 			atomic_dec(nr_running);
 	}
@@ -782,7 +783,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
  */
 static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
 {
-	struct global_cwq *gcwq = worker->pool->gcwq;
+	struct worker_pool *pool = worker->pool;
 	unsigned int oflags = worker->flags;
 
 	WARN_ON_ONCE(worker->task != current);
@@ -796,7 +797,7 @@ static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
 	 */
 	if ((flags & WORKER_NOT_RUNNING) && (oflags & WORKER_NOT_RUNNING))
 		if (!(worker->flags & WORKER_NOT_RUNNING))
-			atomic_inc(get_gcwq_nr_running(gcwq->cpu));
+			atomic_inc(get_pool_nr_running(pool));
 }
 
 /**
@@ -880,15 +881,15 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
 }
 
 /**
- * gcwq_determine_ins_pos - find insertion position
- * @gcwq: gcwq of interest
+ * pool_determine_ins_pos - find insertion position
+ * @pool: pool of interest
  * @cwq: cwq a work is being queued for
  *
- * A work for @cwq is about to be queued on @gcwq, determine insertion
+ * A work for @cwq is about to be queued on @pool, determine insertion
  * position for the work.  If @cwq is for HIGHPRI wq, the work is
  * queued at the head of the queue but in FIFO order with respect to
  * other HIGHPRI works; otherwise, at the end of the queue.  This
- * function also sets GCWQ_HIGHPRI_PENDING flag to hint @gcwq that
+ * function also sets GCWQ_HIGHPRI_PENDING flag to hint @pool that
  * there are HIGHPRI works pending.
  *
  * CONTEXT:
@@ -897,22 +898,22 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
  * RETURNS:
 * Pointer to insertion position.
  */
-static inline struct list_head *gcwq_determine_ins_pos(struct global_cwq *gcwq,
+static inline struct list_head *pool_determine_ins_pos(struct worker_pool *pool,
 					       struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *twork;
 
 	if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
-		return &gcwq->pool.worklist;
+		return &pool->worklist;
 
-	list_for_each_entry(twork, &gcwq->pool.worklist, entry) {
+	list_for_each_entry(twork, &pool->worklist, entry) {
 		struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
 
 		if (!(tcwq->wq->flags & WQ_HIGHPRI))
 			break;
 	}
 
-	gcwq->flags |= GCWQ_HIGHPRI_PENDING;
+	pool->gcwq->flags |= GCWQ_HIGHPRI_PENDING;
 	return &twork->entry;
 }
 
@@ -933,7 +934,7 @@ static void insert_work(struct cpu_workqueue_struct *cwq,
 			struct work_struct *work, struct list_head *head,
 			unsigned int extra_flags)
 {
-	struct global_cwq *gcwq = cwq->pool->gcwq;
+	struct worker_pool *pool = cwq->pool;
 
 	/* we own @work, set data and link */
 	set_work_cwq(work, cwq, extra_flags);
@@ -953,8 +954,8 @@ static void insert_work(struct cpu_workqueue_struct *cwq,
 	 */
 	smp_mb();
 
-	if (__need_more_worker(gcwq))
-		wake_up_worker(gcwq);
+	if (__need_more_worker(pool))
+		wake_up_worker(pool);
 }
 
 /*
@@ -1056,7 +1057,7 @@ static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
 	if (likely(cwq->nr_active < cwq->max_active)) {
 		trace_workqueue_activate_work(work);
 		cwq->nr_active++;
-		worklist = gcwq_determine_ins_pos(gcwq, cwq);
+		worklist = pool_determine_ins_pos(cwq->pool, cwq);
 	} else {
 		work_flags |= WORK_STRUCT_DELAYED;
 		worklist = &cwq->delayed_works;
@@ -1221,7 +1222,7 @@ static void worker_enter_idle(struct worker *worker)
 	list_add(&worker->entry, &pool->idle_list);
 
 	if (likely(!(worker->flags & WORKER_ROGUE))) {
-		if (too_many_workers(gcwq) && !timer_pending(&pool->idle_timer))
+		if (too_many_workers(pool) && !timer_pending(&pool->idle_timer))
 			mod_timer(&pool->idle_timer,
 				  jiffies + IDLE_WORKER_TIMEOUT);
 	} else
@@ -1234,7 +1235,7 @@ static void worker_enter_idle(struct worker *worker)
 	 */
 	WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
 		     pool->nr_workers == pool->nr_idle &&
-		     atomic_read(get_gcwq_nr_running(gcwq->cpu)));
+		     atomic_read(get_pool_nr_running(pool)));
 }
 
 /**
@@ -1356,10 +1357,10 @@ static struct worker *alloc_worker(void)
 
 /**
  * create_worker - create a new workqueue worker
- * @gcwq: gcwq the new worker will belong to
+ * @pool: pool the new worker will belong to
  * @bind: whether to set affinity to @cpu or not
  *
- * Create a new worker which is bound to @gcwq.  The returned worker
+ * Create a new worker which is bound to @pool.  The returned worker
  * can be started by calling start_worker() or destroyed using
  * destroy_worker().
  *
@@ -1369,10 +1370,10 @@ static struct worker *alloc_worker(void)
  * RETURNS:
  * Pointer to the newly created worker.
  */
-static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
+static struct worker *create_worker(struct worker_pool *pool, bool bind)
 {
+	struct global_cwq *gcwq = pool->gcwq;
 	bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
-	struct worker_pool *pool = &gcwq->pool;
 	struct worker *worker = NULL;
 	int id = -1;
 
@@ -1480,27 +1481,27 @@ static void destroy_worker(struct worker *worker)
 	ida_remove(&pool->worker_ida, id);
 }
 
-static void idle_worker_timeout(unsigned long __gcwq)
+static void idle_worker_timeout(unsigned long __pool)
 {
-	struct global_cwq *gcwq = (void *)__gcwq;
+	struct worker_pool *pool = (void *)__pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	spin_lock_irq(&gcwq->lock);
 
-	if (too_many_workers(gcwq)) {
+	if (too_many_workers(pool)) {
 		struct worker *worker;
 		unsigned long expires;
 
 		/* idle_list is kept in LIFO order, check the last one */
-		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
-				    entry);
+		worker = list_entry(pool->idle_list.prev, struct worker, entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires))
-			mod_timer(&gcwq->pool.idle_timer, expires);
+			mod_timer(&pool->idle_timer, expires);
 		else {
 			/* it's been idle for too long, wake up manager */
 			gcwq->flags |= GCWQ_MANAGE_WORKERS;
-			wake_up_worker(gcwq);
+			wake_up_worker(pool);
 		}
 	}
 
@@ -1526,37 +1527,38 @@ static bool send_mayday(struct work_struct *work)
 	return true;
 }
 
-static void gcwq_mayday_timeout(unsigned long __gcwq)
+static void gcwq_mayday_timeout(unsigned long __pool)
 {
-	struct global_cwq *gcwq = (void *)__gcwq;
+	struct worker_pool *pool = (void *)__pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	struct work_struct *work;
 
 	spin_lock_irq(&gcwq->lock);
 
-	if (need_to_create_worker(gcwq)) {
+	if (need_to_create_worker(pool)) {
 		/*
 		 * We've been trying to create a new worker but
 		 * haven't been successful.  We might be hitting an
 		 * allocation deadlock.  Send distress signals to
 		 * rescuers.
 		 */
-		list_for_each_entry(work, &gcwq->pool.worklist, entry)
+		list_for_each_entry(work, &pool->worklist, entry)
 			send_mayday(work);
 	}
 
 	spin_unlock_irq(&gcwq->lock);
 
-	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INTERVAL);
+	mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INTERVAL);
 }
 
 /**
  * maybe_create_worker - create a new worker if necessary
- * @gcwq: gcwq to create a new worker for
+ * @pool: pool to create a new worker for
  *
- * Create a new worker for @gcwq if necessary.  @gcwq is guaranteed to
+ * Create a new worker for @pool if necessary.  @pool is guaranteed to
  * have at least one idle worker on return from this function.  If
  * creating a new worker takes longer than MAYDAY_INTERVAL, mayday is
- * sent to all rescuers with works scheduled on @gcwq to resolve
+ * sent to all rescuers with works scheduled on @pool to resolve
  * possible allocation deadlock.
  *
  * On return, need_to_create_worker() is guaranteed to be false and
@@ -1571,52 +1573,54 @@ static void gcwq_mayday_timeout(unsigned long __gcwq)
  * false if no action was taken and gcwq->lock stayed locked, true
  * otherwise.
  */
-static bool maybe_create_worker(struct global_cwq *gcwq)
+static bool maybe_create_worker(struct worker_pool *pool)
 __releases(&gcwq->lock)
 __acquires(&gcwq->lock)
 {
-	if (!need_to_create_worker(gcwq))
+	struct global_cwq *gcwq = pool->gcwq;
+
+	if (!need_to_create_worker(pool))
 		return false;
 restart:
 	spin_unlock_irq(&gcwq->lock);
 
 	/* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
-	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
+	mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
 
 	while (true) {
 		struct worker *worker;
 
-		worker = create_worker(gcwq, true);
+		worker = create_worker(pool, true);
 		if (worker) {
-			del_timer_sync(&gcwq->pool.mayday_timer);
+			del_timer_sync(&pool->mayday_timer);
 			spin_lock_irq(&gcwq->lock);
 			start_worker(worker);
-			BUG_ON(need_to_create_worker(gcwq));
+			BUG_ON(need_to_create_worker(pool));
 			return true;
 		}
 
-		if (!need_to_create_worker(gcwq))
+		if (!need_to_create_worker(pool))
 			break;
 
 		__set_current_state(TASK_INTERRUPTIBLE);
 		schedule_timeout(CREATE_COOLDOWN);
 
-		if (!need_to_create_worker(gcwq))
+		if (!need_to_create_worker(pool))
 			break;
 	}
 
-	del_timer_sync(&gcwq->pool.mayday_timer);
+	del_timer_sync(&pool->mayday_timer);
 	spin_lock_irq(&gcwq->lock);
-	if (need_to_create_worker(gcwq))
+	if (need_to_create_worker(pool))
 		goto restart;
 	return true;
 }
 
 /**
  * maybe_destroy_worker - destroy workers which have been idle for a while
- * @gcwq: gcwq to destroy workers for
+ * @pool: pool to destroy workers for
  *
- * Destroy @gcwq workers which have been idle for longer than
+ * Destroy @pool workers which have been idle for longer than
  * IDLE_WORKER_TIMEOUT.
  *
  * LOCKING:
@@ -1627,20 +1631,19 @@ restart:
  * false if no action was taken and gcwq->lock stayed locked, true
  * otherwise.
  */
-static bool maybe_destroy_workers(struct global_cwq *gcwq)
+static bool maybe_destroy_workers(struct worker_pool *pool)
 {
 	bool ret = false;
 
-	while (too_many_workers(gcwq)) {
+	while (too_many_workers(pool)) {
 		struct worker *worker;
 		unsigned long expires;
 
-		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
-				    entry);
+		worker = list_entry(pool->idle_list.prev, struct worker, entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires)) {
-			mod_timer(&gcwq->pool.idle_timer, expires);
+			mod_timer(&pool->idle_timer, expires);
 			break;
 		}
 
@@ -1673,7 +1676,8 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
  */
 static bool manage_workers(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->pool->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	bool ret = false;
 
 	if (gcwq->flags & GCWQ_MANAGING_WORKERS)
@@ -1686,8 +1690,8 @@ static bool manage_workers(struct worker *worker)
 	 * Destroy and then create so that may_start_working() is true
 	 * on return.
 	 */
-	ret |= maybe_destroy_workers(gcwq);
-	ret |= maybe_create_worker(gcwq);
+	ret |= maybe_destroy_workers(pool);
+	ret |= maybe_create_worker(pool);
 
 	gcwq->flags &= ~GCWQ_MANAGING_WORKERS;
 
@@ -1746,7 +1750,7 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *work = list_first_entry(&cwq->delayed_works,
 						    struct work_struct, entry);
-	struct list_head *pos = gcwq_determine_ins_pos(cwq->pool->gcwq, cwq);
+	struct list_head *pos = pool_determine_ins_pos(cwq->pool, cwq);
 
 	trace_workqueue_activate_work(work);
 	move_linked_works(work, pos, NULL);
@@ -1874,7 +1878,7 @@ __acquires(&gcwq->lock)
 
 		if (!list_empty(&pool->worklist) &&
 		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
-			wake_up_worker(gcwq);
+			wake_up_worker(pool);
 		else
 			gcwq->flags &= ~GCWQ_HIGHPRI_PENDING;
 	}
@@ -1890,8 +1894,8 @@ __acquires(&gcwq->lock)
 	 * Unbound gcwq isn't concurrency managed and work items should be
 	 * executed ASAP.  Wake up another worker if necessary.
 	 */
-	if ((worker->flags & WORKER_UNBOUND) && need_more_worker(gcwq))
-		wake_up_worker(gcwq);
+	if ((worker->flags & WORKER_UNBOUND) && need_more_worker(pool))
+		wake_up_worker(pool);
 
 	spin_unlock_irq(&gcwq->lock);
 
@@ -1983,11 +1987,11 @@ woke_up:
 	worker_leave_idle(worker);
 recheck:
 	/* no more worker necessary? */
-	if (!need_more_worker(gcwq))
+	if (!need_more_worker(pool))
 		goto sleep;
 
 	/* do we need to manage? */
-	if (unlikely(!may_start_working(gcwq)) && manage_workers(worker))
+	if (unlikely(!may_start_working(pool)) && manage_workers(worker))
 		goto recheck;
 
 	/*
@@ -2018,11 +2022,11 @@ recheck:
 			move_linked_works(work, &worker->scheduled, NULL);
 			process_scheduled_works(worker);
 		}
-	} while (keep_working(gcwq));
+	} while (keep_working(pool));
 
 	worker_set_flags(worker, WORKER_PREP, false);
 sleep:
-	if (unlikely(need_to_manage_workers(gcwq)) && manage_workers(worker))
+	if (unlikely(need_to_manage_workers(pool)) && manage_workers(worker))
 		goto recheck;
 
 	/*
@@ -2107,8 +2111,8 @@ repeat:
 		 * regular worker; otherwise, we end up with 0 concurrency
 		 * and stalling the execution.
 		 */
-		if (keep_working(gcwq))
-			wake_up_worker(gcwq);
+		if (keep_working(pool))
+			wake_up_worker(pool);
 
 		spin_unlock_irq(&gcwq->lock);
 	}
@@ -3383,7 +3387,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * keep_working() are always true as long as the worklist is
 	 * not empty.
 	 */
-	atomic_set(get_gcwq_nr_running(gcwq->cpu), 0);
+	atomic_set(get_pool_nr_running(&gcwq->pool), 0);
 
 	spin_unlock_irq(&gcwq->lock);
 	del_timer_sync(&gcwq->pool.idle_timer);
@@ -3424,9 +3428,9 @@ static int __cpuinit trustee_thread(void *__gcwq)
 			wake_up_process(worker->task);
 		}
 
-		if (need_to_create_worker(gcwq)) {
+		if (need_to_create_worker(&gcwq->pool)) {
 			spin_unlock_irq(&gcwq->lock);
-			worker = create_worker(gcwq, false);
+			worker = create_worker(&gcwq->pool, false);
 			spin_lock_irq(&gcwq->lock);
 			if (worker) {
 				worker->flags |= WORKER_ROGUE;
@@ -3540,7 +3544,7 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		/* fall through */
 	case CPU_UP_PREPARE:
 		BUG_ON(gcwq->pool.first_idle);
-		new_worker = create_worker(gcwq, false);
+		new_worker = create_worker(&gcwq->pool, false);
 		if (!new_worker) {
 			if (new_trustee)
 				kthread_stop(new_trustee);
@@ -3788,7 +3792,7 @@ void thaw_workqueues(void)
 				cwq_activate_first_delayed(cwq);
 		}
 
-		wake_up_worker(gcwq);
+		wake_up_worker(&gcwq->pool);
 
 		spin_unlock_irq(&gcwq->lock);
 	}
@@ -3822,10 +3826,10 @@ static int __init init_workqueues(void)
 
 		init_timer_deferrable(&gcwq->pool.idle_timer);
 		gcwq->pool.idle_timer.function = idle_worker_timeout;
-		gcwq->pool.idle_timer.data = (unsigned long)gcwq;
+		gcwq->pool.idle_timer.data = (unsigned long)&gcwq->pool;
 
 		setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
-			    (unsigned long)gcwq);
+			    (unsigned long)&gcwq->pool);
 
 		ida_init(&gcwq->pool.worker_ida);
 
@@ -3840,7 +3844,7 @@ static int __init init_workqueues(void)
 
 		if (cpu != WORK_CPU_UNBOUND)
 			gcwq->flags &= ~GCWQ_DISASSOCIATED;
-		worker = create_worker(gcwq, true);
+		worker = create_worker(&gcwq->pool, true);
 		BUG_ON(!worker);
 		spin_lock_irq(&gcwq->lock);
 		start_worker(worker);
-- 
1.7.7.3

^ permalink raw reply related	[flat|nested] 96+ messages in thread

 
-static void idle_worker_timeout(unsigned long __gcwq)
+static void idle_worker_timeout(unsigned long __pool)
 {
-	struct global_cwq *gcwq = (void *)__gcwq;
+	struct worker_pool *pool = (void *)__pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	spin_lock_irq(&gcwq->lock);
 
-	if (too_many_workers(gcwq)) {
+	if (too_many_workers(pool)) {
 		struct worker *worker;
 		unsigned long expires;
 
 		/* idle_list is kept in LIFO order, check the last one */
-		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
-				    entry);
+		worker = list_entry(pool->idle_list.prev, struct worker, entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires))
-			mod_timer(&gcwq->pool.idle_timer, expires);
+			mod_timer(&pool->idle_timer, expires);
 		else {
 			/* it's been idle for too long, wake up manager */
 			gcwq->flags |= GCWQ_MANAGE_WORKERS;
-			wake_up_worker(gcwq);
+			wake_up_worker(pool);
 		}
 	}
 
@@ -1526,37 +1527,38 @@ static bool send_mayday(struct work_struct *work)
 	return true;
 }
 
-static void gcwq_mayday_timeout(unsigned long __gcwq)
+static void gcwq_mayday_timeout(unsigned long __pool)
 {
-	struct global_cwq *gcwq = (void *)__gcwq;
+	struct worker_pool *pool = (void *)__pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	struct work_struct *work;
 
 	spin_lock_irq(&gcwq->lock);
 
-	if (need_to_create_worker(gcwq)) {
+	if (need_to_create_worker(pool)) {
 		/*
 		 * We've been trying to create a new worker but
 		 * haven't been successful.  We might be hitting an
 		 * allocation deadlock.  Send distress signals to
 		 * rescuers.
 		 */
-		list_for_each_entry(work, &gcwq->pool.worklist, entry)
+		list_for_each_entry(work, &pool->worklist, entry)
 			send_mayday(work);
 	}
 
 	spin_unlock_irq(&gcwq->lock);
 
-	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INTERVAL);
+	mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INTERVAL);
 }
 
 /**
  * maybe_create_worker - create a new worker if necessary
- * @gcwq: gcwq to create a new worker for
+ * @pool: pool to create a new worker for
  *
- * Create a new worker for @gcwq if necessary.  @gcwq is guaranteed to
+ * Create a new worker for @pool if necessary.  @pool is guaranteed to
  * have at least one idle worker on return from this function.  If
  * creating a new worker takes longer than MAYDAY_INTERVAL, mayday is
- * sent to all rescuers with works scheduled on @gcwq to resolve
+ * sent to all rescuers with works scheduled on @pool to resolve
  * possible allocation deadlock.
  *
  * On return, need_to_create_worker() is guaranteed to be false and
@@ -1571,52 +1573,54 @@ static void gcwq_mayday_timeout(unsigned long __gcwq)
  * false if no action was taken and gcwq->lock stayed locked, true
  * otherwise.
  */
-static bool maybe_create_worker(struct global_cwq *gcwq)
+static bool maybe_create_worker(struct worker_pool *pool)
 __releases(&gcwq->lock)
 __acquires(&gcwq->lock)
 {
-	if (!need_to_create_worker(gcwq))
+	struct global_cwq *gcwq = pool->gcwq;
+
+	if (!need_to_create_worker(pool))
 		return false;
 restart:
 	spin_unlock_irq(&gcwq->lock);
 
 	/* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
-	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
+	mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
 
 	while (true) {
 		struct worker *worker;
 
-		worker = create_worker(gcwq, true);
+		worker = create_worker(pool, true);
 		if (worker) {
-			del_timer_sync(&gcwq->pool.mayday_timer);
+			del_timer_sync(&pool->mayday_timer);
 			spin_lock_irq(&gcwq->lock);
 			start_worker(worker);
-			BUG_ON(need_to_create_worker(gcwq));
+			BUG_ON(need_to_create_worker(pool));
 			return true;
 		}
 
-		if (!need_to_create_worker(gcwq))
+		if (!need_to_create_worker(pool))
 			break;
 
 		__set_current_state(TASK_INTERRUPTIBLE);
 		schedule_timeout(CREATE_COOLDOWN);
 
-		if (!need_to_create_worker(gcwq))
+		if (!need_to_create_worker(pool))
 			break;
 	}
 
-	del_timer_sync(&gcwq->pool.mayday_timer);
+	del_timer_sync(&pool->mayday_timer);
 	spin_lock_irq(&gcwq->lock);
-	if (need_to_create_worker(gcwq))
+	if (need_to_create_worker(pool))
 		goto restart;
 	return true;
 }
 
 /**
  * maybe_destroy_worker - destroy workers which have been idle for a while
- * @gcwq: gcwq to destroy workers for
+ * @pool: pool to destroy workers for
  *
- * Destroy @gcwq workers which have been idle for longer than
+ * Destroy @pool workers which have been idle for longer than
  * IDLE_WORKER_TIMEOUT.
  *
  * LOCKING:
@@ -1627,20 +1631,19 @@ restart:
  * false if no action was taken and gcwq->lock stayed locked, true
  * otherwise.
  */
-static bool maybe_destroy_workers(struct global_cwq *gcwq)
+static bool maybe_destroy_workers(struct worker_pool *pool)
 {
 	bool ret = false;
 
-	while (too_many_workers(gcwq)) {
+	while (too_many_workers(pool)) {
 		struct worker *worker;
 		unsigned long expires;
 
-		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
-				    entry);
+		worker = list_entry(pool->idle_list.prev, struct worker, entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires)) {
-			mod_timer(&gcwq->pool.idle_timer, expires);
+			mod_timer(&pool->idle_timer, expires);
 			break;
 		}
 
@@ -1673,7 +1676,8 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
  */
 static bool manage_workers(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->pool->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	bool ret = false;
 
 	if (gcwq->flags & GCWQ_MANAGING_WORKERS)
@@ -1686,8 +1690,8 @@ static bool manage_workers(struct worker *worker)
 	 * Destroy and then create so that may_start_working() is true
 	 * on return.
 	 */
-	ret |= maybe_destroy_workers(gcwq);
-	ret |= maybe_create_worker(gcwq);
+	ret |= maybe_destroy_workers(pool);
+	ret |= maybe_create_worker(pool);
 
 	gcwq->flags &= ~GCWQ_MANAGING_WORKERS;
 
@@ -1746,7 +1750,7 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *work = list_first_entry(&cwq->delayed_works,
 						    struct work_struct, entry);
-	struct list_head *pos = gcwq_determine_ins_pos(cwq->pool->gcwq, cwq);
+	struct list_head *pos = pool_determine_ins_pos(cwq->pool, cwq);
 
 	trace_workqueue_activate_work(work);
 	move_linked_works(work, pos, NULL);
@@ -1874,7 +1878,7 @@ __acquires(&gcwq->lock)
 
 		if (!list_empty(&pool->worklist) &&
 		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
-			wake_up_worker(gcwq);
+			wake_up_worker(pool);
 		else
 			gcwq->flags &= ~GCWQ_HIGHPRI_PENDING;
 	}
@@ -1890,8 +1894,8 @@ __acquires(&gcwq->lock)
 	 * Unbound gcwq isn't concurrency managed and work items should be
 	 * executed ASAP.  Wake up another worker if necessary.
 	 */
-	if ((worker->flags & WORKER_UNBOUND) && need_more_worker(gcwq))
-		wake_up_worker(gcwq);
+	if ((worker->flags & WORKER_UNBOUND) && need_more_worker(pool))
+		wake_up_worker(pool);
 
 	spin_unlock_irq(&gcwq->lock);
 
@@ -1983,11 +1987,11 @@ woke_up:
 	worker_leave_idle(worker);
 recheck:
 	/* no more worker necessary? */
-	if (!need_more_worker(gcwq))
+	if (!need_more_worker(pool))
 		goto sleep;
 
 	/* do we need to manage? */
-	if (unlikely(!may_start_working(gcwq)) && manage_workers(worker))
+	if (unlikely(!may_start_working(pool)) && manage_workers(worker))
 		goto recheck;
 
 	/*
@@ -2018,11 +2022,11 @@ recheck:
 			move_linked_works(work, &worker->scheduled, NULL);
 			process_scheduled_works(worker);
 		}
-	} while (keep_working(gcwq));
+	} while (keep_working(pool));
 
 	worker_set_flags(worker, WORKER_PREP, false);
 sleep:
-	if (unlikely(need_to_manage_workers(gcwq)) && manage_workers(worker))
+	if (unlikely(need_to_manage_workers(pool)) && manage_workers(worker))
 		goto recheck;
 
 	/*
@@ -2107,8 +2111,8 @@ repeat:
 		 * regular worker; otherwise, we end up with 0 concurrency
 		 * and stalling the execution.
 		 */
-		if (keep_working(gcwq))
-			wake_up_worker(gcwq);
+		if (keep_working(pool))
+			wake_up_worker(pool);
 
 		spin_unlock_irq(&gcwq->lock);
 	}
@@ -3383,7 +3387,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * keep_working() are always true as long as the worklist is
 	 * not empty.
 	 */
-	atomic_set(get_gcwq_nr_running(gcwq->cpu), 0);
+	atomic_set(get_pool_nr_running(&gcwq->pool), 0);
 
 	spin_unlock_irq(&gcwq->lock);
 	del_timer_sync(&gcwq->pool.idle_timer);
@@ -3424,9 +3428,9 @@ static int __cpuinit trustee_thread(void *__gcwq)
 			wake_up_process(worker->task);
 		}
 
-		if (need_to_create_worker(gcwq)) {
+		if (need_to_create_worker(&gcwq->pool)) {
 			spin_unlock_irq(&gcwq->lock);
-			worker = create_worker(gcwq, false);
+			worker = create_worker(&gcwq->pool, false);
 			spin_lock_irq(&gcwq->lock);
 			if (worker) {
 				worker->flags |= WORKER_ROGUE;
@@ -3540,7 +3544,7 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		/* fall through */
 	case CPU_UP_PREPARE:
 		BUG_ON(gcwq->pool.first_idle);
-		new_worker = create_worker(gcwq, false);
+		new_worker = create_worker(&gcwq->pool, false);
 		if (!new_worker) {
 			if (new_trustee)
 				kthread_stop(new_trustee);
@@ -3788,7 +3792,7 @@ void thaw_workqueues(void)
 				cwq_activate_first_delayed(cwq);
 		}
 
-		wake_up_worker(gcwq);
+		wake_up_worker(&gcwq->pool);
 
 		spin_unlock_irq(&gcwq->lock);
 	}
@@ -3822,10 +3826,10 @@ static int __init init_workqueues(void)
 
 		init_timer_deferrable(&gcwq->pool.idle_timer);
 		gcwq->pool.idle_timer.function = idle_worker_timeout;
-		gcwq->pool.idle_timer.data = (unsigned long)gcwq;
+		gcwq->pool.idle_timer.data = (unsigned long)&gcwq->pool;
 
 		setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
-			    (unsigned long)gcwq);
+			    (unsigned long)&gcwq->pool);
 
 		ida_init(&gcwq->pool.worker_ida);
 
@@ -3840,7 +3844,7 @@ static int __init init_workqueues(void)
 
 		if (cpu != WORK_CPU_UNBOUND)
 			gcwq->flags &= ~GCWQ_DISASSOCIATED;
-		worker = create_worker(gcwq, true);
+		worker = create_worker(&gcwq->pool, true);
 		BUG_ON(!worker);
 		spin_lock_irq(&gcwq->lock);
 		start_worker(worker);
-- 
1.7.7.3


^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH 3/6] workqueue: use @pool instead of @gcwq or @cpu where applicable
@ 2012-07-09 18:41   ` Tejun Heo
  0 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-09 18:41 UTC (permalink / raw)
  To: linux-kernel
  Cc: axboe, elder, rni, martin.petersen, linux-bluetooth, torvalds,
	marcel, vwadekar, swhiteho, herbert, bpm, linux-crypto, gustavo,
	Tejun Heo, xfs, joshhunt00, davem, vgoyal, johan.hedberg

Modify all functions which deal with per-pool properties to pass
around @pool instead of @gcwq or @cpu.

The changes in this patch are mechanical and don't cause any
functional difference.  This is to prepare for multiple pools per
gcwq.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 kernel/workqueue.c |  218 ++++++++++++++++++++++++++-------------------------
 1 files changed, 111 insertions(+), 107 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index bc43a0c..9f82c25 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -471,8 +471,10 @@ static struct global_cwq *get_gcwq(unsigned int cpu)
 		return &unbound_global_cwq;
 }
 
-static atomic_t *get_gcwq_nr_running(unsigned int cpu)
+static atomic_t *get_pool_nr_running(struct worker_pool *pool)
 {
+	int cpu = pool->gcwq->cpu;
+
 	if (cpu != WORK_CPU_UNBOUND)
 		return &per_cpu(gcwq_nr_running, cpu);
 	else
@@ -578,10 +580,10 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
  * assume that they're being called with gcwq->lock held.
  */
 
-static bool __need_more_worker(struct global_cwq *gcwq)
+static bool __need_more_worker(struct worker_pool *pool)
 {
-	return !atomic_read(get_gcwq_nr_running(gcwq->cpu)) ||
-		gcwq->flags & GCWQ_HIGHPRI_PENDING;
+	return !atomic_read(get_pool_nr_running(pool)) ||
+		pool->gcwq->flags & GCWQ_HIGHPRI_PENDING;
 }
 
 /*
@@ -592,45 +594,46 @@ static bool __need_more_worker(struct global_cwq *gcwq)
  * function will always return %true for unbound gcwq as long as the
  * worklist isn't empty.
  */
-static bool need_more_worker(struct global_cwq *gcwq)
+static bool need_more_worker(struct worker_pool *pool)
 {
-	return !list_empty(&gcwq->pool.worklist) && __need_more_worker(gcwq);
+	return !list_empty(&pool->worklist) && __need_more_worker(pool);
 }
 
 /* Can I start working?  Called from busy but !running workers. */
-static bool may_start_working(struct global_cwq *gcwq)
+static bool may_start_working(struct worker_pool *pool)
 {
-	return gcwq->pool.nr_idle;
+	return pool->nr_idle;
 }
 
 /* Do I need to keep working?  Called from currently running workers. */
-static bool keep_working(struct global_cwq *gcwq)
+static bool keep_working(struct worker_pool *pool)
 {
-	atomic_t *nr_running = get_gcwq_nr_running(gcwq->cpu);
+	atomic_t *nr_running = get_pool_nr_running(pool);
 
-	return !list_empty(&gcwq->pool.worklist) &&
+	return !list_empty(&pool->worklist) &&
 		(atomic_read(nr_running) <= 1 ||
-		 gcwq->flags & GCWQ_HIGHPRI_PENDING);
+		 pool->gcwq->flags & GCWQ_HIGHPRI_PENDING);
 }
 
 /* Do we need a new worker?  Called from manager. */
-static bool need_to_create_worker(struct global_cwq *gcwq)
+static bool need_to_create_worker(struct worker_pool *pool)
 {
-	return need_more_worker(gcwq) && !may_start_working(gcwq);
+	return need_more_worker(pool) && !may_start_working(pool);
 }
 
 /* Do I need to be the manager? */
-static bool need_to_manage_workers(struct global_cwq *gcwq)
+static bool need_to_manage_workers(struct worker_pool *pool)
 {
-	return need_to_create_worker(gcwq) || gcwq->flags & GCWQ_MANAGE_WORKERS;
+	return need_to_create_worker(pool) ||
+		pool->gcwq->flags & GCWQ_MANAGE_WORKERS;
 }
 
 /* Do we have too many workers and should some go away? */
-static bool too_many_workers(struct global_cwq *gcwq)
+static bool too_many_workers(struct worker_pool *pool)
 {
-	bool managing = gcwq->flags & GCWQ_MANAGING_WORKERS;
-	int nr_idle = gcwq->pool.nr_idle + managing; /* manager is considered idle */
-	int nr_busy = gcwq->pool.nr_workers - nr_idle;
+	bool managing = pool->gcwq->flags & GCWQ_MANAGING_WORKERS;
+	int nr_idle = pool->nr_idle + managing; /* manager is considered idle */
+	int nr_busy = pool->nr_workers - nr_idle;
 
 	return nr_idle > 2 && (nr_idle - 2) * MAX_IDLE_WORKERS_RATIO >= nr_busy;
 }
@@ -640,26 +643,26 @@ static bool too_many_workers(struct global_cwq *gcwq)
  */
 
 /* Return the first worker.  Safe with preemption disabled */
-static struct worker *first_worker(struct global_cwq *gcwq)
+static struct worker *first_worker(struct worker_pool *pool)
 {
-	if (unlikely(list_empty(&gcwq->pool.idle_list)))
+	if (unlikely(list_empty(&pool->idle_list)))
 		return NULL;
 
-	return list_first_entry(&gcwq->pool.idle_list, struct worker, entry);
+	return list_first_entry(&pool->idle_list, struct worker, entry);
 }
 
 /**
  * wake_up_worker - wake up an idle worker
- * @gcwq: gcwq to wake worker for
+ * @pool: worker pool to wake worker from
  *
- * Wake up the first idle worker of @gcwq.
+ * Wake up the first idle worker of @pool.
  *
  * CONTEXT:
  * spin_lock_irq(gcwq->lock).
  */
-static void wake_up_worker(struct global_cwq *gcwq)
+static void wake_up_worker(struct worker_pool *pool)
 {
-	struct worker *worker = first_worker(gcwq);
+	struct worker *worker = first_worker(pool);
 
 	if (likely(worker))
 		wake_up_process(worker->task);
@@ -681,7 +684,7 @@ void wq_worker_waking_up(struct task_struct *task, unsigned int cpu)
 	struct worker *worker = kthread_data(task);
 
 	if (!(worker->flags & WORKER_NOT_RUNNING))
-		atomic_inc(get_gcwq_nr_running(cpu));
+		atomic_inc(get_pool_nr_running(worker->pool));
 }
 
 /**
@@ -704,8 +707,7 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 {
 	struct worker *worker = kthread_data(task), *to_wakeup = NULL;
 	struct worker_pool *pool = worker->pool;
-	struct global_cwq *gcwq = pool->gcwq;
-	atomic_t *nr_running = get_gcwq_nr_running(cpu);
+	atomic_t *nr_running = get_pool_nr_running(pool);
 
 	if (worker->flags & WORKER_NOT_RUNNING)
 		return NULL;
@@ -725,7 +727,7 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 	 * without gcwq lock is safe.
 	 */
 	if (atomic_dec_and_test(nr_running) && !list_empty(&pool->worklist))
-		to_wakeup = first_worker(gcwq);
+		to_wakeup = first_worker(pool);
 	return to_wakeup ? to_wakeup->task : NULL;
 }
 
@@ -746,7 +748,6 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
 				    bool wakeup)
 {
 	struct worker_pool *pool = worker->pool;
-	struct global_cwq *gcwq = pool->gcwq;
 
 	WARN_ON_ONCE(worker->task != current);
 
@@ -757,12 +758,12 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
 	 */
 	if ((flags & WORKER_NOT_RUNNING) &&
 	    !(worker->flags & WORKER_NOT_RUNNING)) {
-		atomic_t *nr_running = get_gcwq_nr_running(gcwq->cpu);
+		atomic_t *nr_running = get_pool_nr_running(pool);
 
 		if (wakeup) {
 			if (atomic_dec_and_test(nr_running) &&
 			    !list_empty(&pool->worklist))
-				wake_up_worker(gcwq);
+				wake_up_worker(pool);
 		} else
 			atomic_dec(nr_running);
 	}
@@ -782,7 +783,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
  */
 static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
 {
-	struct global_cwq *gcwq = worker->pool->gcwq;
+	struct worker_pool *pool = worker->pool;
 	unsigned int oflags = worker->flags;
 
 	WARN_ON_ONCE(worker->task != current);
@@ -796,7 +797,7 @@ static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
 	 */
 	if ((flags & WORKER_NOT_RUNNING) && (oflags & WORKER_NOT_RUNNING))
 		if (!(worker->flags & WORKER_NOT_RUNNING))
-			atomic_inc(get_gcwq_nr_running(gcwq->cpu));
+			atomic_inc(get_pool_nr_running(pool));
 }
 
 /**
@@ -880,15 +881,15 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
 }
 
 /**
- * gcwq_determine_ins_pos - find insertion position
- * @gcwq: gcwq of interest
+ * pool_determine_ins_pos - find insertion position
+ * @pool: pool of interest
  * @cwq: cwq a work is being queued for
  *
- * A work for @cwq is about to be queued on @gcwq, determine insertion
+ * A work for @cwq is about to be queued on @pool, determine insertion
  * position for the work.  If @cwq is for HIGHPRI wq, the work is
  * queued at the head of the queue but in FIFO order with respect to
  * other HIGHPRI works; otherwise, at the end of the queue.  This
- * function also sets GCWQ_HIGHPRI_PENDING flag to hint @gcwq that
+ * function also sets GCWQ_HIGHPRI_PENDING flag to hint @pool that
  * there are HIGHPRI works pending.
  *
  * CONTEXT:
@@ -897,22 +898,22 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
  * RETURNS:
  * Pointer to insertion position.
  */
-static inline struct list_head *gcwq_determine_ins_pos(struct global_cwq *gcwq,
+static inline struct list_head *pool_determine_ins_pos(struct worker_pool *pool,
 					       struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *twork;
 
 	if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
-		return &gcwq->pool.worklist;
+		return &pool->worklist;
 
-	list_for_each_entry(twork, &gcwq->pool.worklist, entry) {
+	list_for_each_entry(twork, &pool->worklist, entry) {
 		struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
 
 		if (!(tcwq->wq->flags & WQ_HIGHPRI))
 			break;
 	}
 
-	gcwq->flags |= GCWQ_HIGHPRI_PENDING;
+	pool->gcwq->flags |= GCWQ_HIGHPRI_PENDING;
 	return &twork->entry;
 }
 
@@ -933,7 +934,7 @@ static void insert_work(struct cpu_workqueue_struct *cwq,
 			struct work_struct *work, struct list_head *head,
 			unsigned int extra_flags)
 {
-	struct global_cwq *gcwq = cwq->pool->gcwq;
+	struct worker_pool *pool = cwq->pool;
 
 	/* we own @work, set data and link */
 	set_work_cwq(work, cwq, extra_flags);
@@ -953,8 +954,8 @@ static void insert_work(struct cpu_workqueue_struct *cwq,
 	 */
 	smp_mb();
 
-	if (__need_more_worker(gcwq))
-		wake_up_worker(gcwq);
+	if (__need_more_worker(pool))
+		wake_up_worker(pool);
 }
 
 /*
@@ -1056,7 +1057,7 @@ static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
 	if (likely(cwq->nr_active < cwq->max_active)) {
 		trace_workqueue_activate_work(work);
 		cwq->nr_active++;
-		worklist = gcwq_determine_ins_pos(gcwq, cwq);
+		worklist = pool_determine_ins_pos(cwq->pool, cwq);
 	} else {
 		work_flags |= WORK_STRUCT_DELAYED;
 		worklist = &cwq->delayed_works;
@@ -1221,7 +1222,7 @@ static void worker_enter_idle(struct worker *worker)
 	list_add(&worker->entry, &pool->idle_list);
 
 	if (likely(!(worker->flags & WORKER_ROGUE))) {
-		if (too_many_workers(gcwq) && !timer_pending(&pool->idle_timer))
+		if (too_many_workers(pool) && !timer_pending(&pool->idle_timer))
 			mod_timer(&pool->idle_timer,
 				  jiffies + IDLE_WORKER_TIMEOUT);
 	} else
@@ -1234,7 +1235,7 @@ static void worker_enter_idle(struct worker *worker)
 	 */
 	WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
 		     pool->nr_workers == pool->nr_idle &&
-		     atomic_read(get_gcwq_nr_running(gcwq->cpu)));
+		     atomic_read(get_pool_nr_running(pool)));
 }
 
 /**
@@ -1356,10 +1357,10 @@ static struct worker *alloc_worker(void)
 
 /**
  * create_worker - create a new workqueue worker
- * @gcwq: gcwq the new worker will belong to
+ * @pool: pool the new worker will belong to
  * @bind: whether to set affinity to @cpu or not
  *
- * Create a new worker which is bound to @gcwq.  The returned worker
+ * Create a new worker which is bound to @pool.  The returned worker
  * can be started by calling start_worker() or destroyed using
  * destroy_worker().
  *
@@ -1369,10 +1370,10 @@ static struct worker *alloc_worker(void)
  * RETURNS:
  * Pointer to the newly created worker.
  */
-static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
+static struct worker *create_worker(struct worker_pool *pool, bool bind)
 {
+	struct global_cwq *gcwq = pool->gcwq;
 	bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
-	struct worker_pool *pool = &gcwq->pool;
 	struct worker *worker = NULL;
 	int id = -1;
 
@@ -1480,27 +1481,27 @@ static void destroy_worker(struct worker *worker)
 	ida_remove(&pool->worker_ida, id);
 }
 
-static void idle_worker_timeout(unsigned long __gcwq)
+static void idle_worker_timeout(unsigned long __pool)
 {
-	struct global_cwq *gcwq = (void *)__gcwq;
+	struct worker_pool *pool = (void *)__pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	spin_lock_irq(&gcwq->lock);
 
-	if (too_many_workers(gcwq)) {
+	if (too_many_workers(pool)) {
 		struct worker *worker;
 		unsigned long expires;
 
 		/* idle_list is kept in LIFO order, check the last one */
-		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
-				    entry);
+		worker = list_entry(pool->idle_list.prev, struct worker, entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires))
-			mod_timer(&gcwq->pool.idle_timer, expires);
+			mod_timer(&pool->idle_timer, expires);
 		else {
 			/* it's been idle for too long, wake up manager */
 			gcwq->flags |= GCWQ_MANAGE_WORKERS;
-			wake_up_worker(gcwq);
+			wake_up_worker(pool);
 		}
 	}
 
@@ -1526,37 +1527,38 @@ static bool send_mayday(struct work_struct *work)
 	return true;
 }
 
-static void gcwq_mayday_timeout(unsigned long __gcwq)
+static void gcwq_mayday_timeout(unsigned long __pool)
 {
-	struct global_cwq *gcwq = (void *)__gcwq;
+	struct worker_pool *pool = (void *)__pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	struct work_struct *work;
 
 	spin_lock_irq(&gcwq->lock);
 
-	if (need_to_create_worker(gcwq)) {
+	if (need_to_create_worker(pool)) {
 		/*
 		 * We've been trying to create a new worker but
 		 * haven't been successful.  We might be hitting an
 		 * allocation deadlock.  Send distress signals to
 		 * rescuers.
 		 */
-		list_for_each_entry(work, &gcwq->pool.worklist, entry)
+		list_for_each_entry(work, &pool->worklist, entry)
 			send_mayday(work);
 	}
 
 	spin_unlock_irq(&gcwq->lock);
 
-	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INTERVAL);
+	mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INTERVAL);
 }
 
 /**
  * maybe_create_worker - create a new worker if necessary
- * @gcwq: gcwq to create a new worker for
+ * @pool: pool to create a new worker for
  *
- * Create a new worker for @gcwq if necessary.  @gcwq is guaranteed to
+ * Create a new worker for @pool if necessary.  @pool is guaranteed to
  * have at least one idle worker on return from this function.  If
  * creating a new worker takes longer than MAYDAY_INTERVAL, mayday is
- * sent to all rescuers with works scheduled on @gcwq to resolve
+ * sent to all rescuers with works scheduled on @pool to resolve
  * possible allocation deadlock.
  *
  * On return, need_to_create_worker() is guaranteed to be false and
@@ -1571,52 +1573,54 @@ static void gcwq_mayday_timeout(unsigned long __gcwq)
  * false if no action was taken and gcwq->lock stayed locked, true
  * otherwise.
  */
-static bool maybe_create_worker(struct global_cwq *gcwq)
+static bool maybe_create_worker(struct worker_pool *pool)
 __releases(&gcwq->lock)
 __acquires(&gcwq->lock)
 {
-	if (!need_to_create_worker(gcwq))
+	struct global_cwq *gcwq = pool->gcwq;
+
+	if (!need_to_create_worker(pool))
 		return false;
 restart:
 	spin_unlock_irq(&gcwq->lock);
 
 	/* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
-	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
+	mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
 
 	while (true) {
 		struct worker *worker;
 
-		worker = create_worker(gcwq, true);
+		worker = create_worker(pool, true);
 		if (worker) {
-			del_timer_sync(&gcwq->pool.mayday_timer);
+			del_timer_sync(&pool->mayday_timer);
 			spin_lock_irq(&gcwq->lock);
 			start_worker(worker);
-			BUG_ON(need_to_create_worker(gcwq));
+			BUG_ON(need_to_create_worker(pool));
 			return true;
 		}
 
-		if (!need_to_create_worker(gcwq))
+		if (!need_to_create_worker(pool))
 			break;
 
 		__set_current_state(TASK_INTERRUPTIBLE);
 		schedule_timeout(CREATE_COOLDOWN);
 
-		if (!need_to_create_worker(gcwq))
+		if (!need_to_create_worker(pool))
 			break;
 	}
 
-	del_timer_sync(&gcwq->pool.mayday_timer);
+	del_timer_sync(&pool->mayday_timer);
 	spin_lock_irq(&gcwq->lock);
-	if (need_to_create_worker(gcwq))
+	if (need_to_create_worker(pool))
 		goto restart;
 	return true;
 }
 
 /**
  * maybe_destroy_worker - destroy workers which have been idle for a while
- * @gcwq: gcwq to destroy workers for
+ * @pool: pool to destroy workers for
  *
- * Destroy @gcwq workers which have been idle for longer than
+ * Destroy @pool workers which have been idle for longer than
  * IDLE_WORKER_TIMEOUT.
  *
  * LOCKING:
@@ -1627,20 +1631,19 @@ restart:
  * false if no action was taken and gcwq->lock stayed locked, true
  * otherwise.
  */
-static bool maybe_destroy_workers(struct global_cwq *gcwq)
+static bool maybe_destroy_workers(struct worker_pool *pool)
 {
 	bool ret = false;
 
-	while (too_many_workers(gcwq)) {
+	while (too_many_workers(pool)) {
 		struct worker *worker;
 		unsigned long expires;
 
-		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
-				    entry);
+		worker = list_entry(pool->idle_list.prev, struct worker, entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires)) {
-			mod_timer(&gcwq->pool.idle_timer, expires);
+			mod_timer(&pool->idle_timer, expires);
 			break;
 		}
 
@@ -1673,7 +1676,8 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
  */
 static bool manage_workers(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->pool->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	bool ret = false;
 
 	if (gcwq->flags & GCWQ_MANAGING_WORKERS)
@@ -1686,8 +1690,8 @@ static bool manage_workers(struct worker *worker)
 	 * Destroy and then create so that may_start_working() is true
 	 * on return.
 	 */
-	ret |= maybe_destroy_workers(gcwq);
-	ret |= maybe_create_worker(gcwq);
+	ret |= maybe_destroy_workers(pool);
+	ret |= maybe_create_worker(pool);
 
 	gcwq->flags &= ~GCWQ_MANAGING_WORKERS;
 
@@ -1746,7 +1750,7 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *work = list_first_entry(&cwq->delayed_works,
 						    struct work_struct, entry);
-	struct list_head *pos = gcwq_determine_ins_pos(cwq->pool->gcwq, cwq);
+	struct list_head *pos = pool_determine_ins_pos(cwq->pool, cwq);
 
 	trace_workqueue_activate_work(work);
 	move_linked_works(work, pos, NULL);
@@ -1874,7 +1878,7 @@ __acquires(&gcwq->lock)
 
 		if (!list_empty(&pool->worklist) &&
 		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
-			wake_up_worker(gcwq);
+			wake_up_worker(pool);
 		else
 			gcwq->flags &= ~GCWQ_HIGHPRI_PENDING;
 	}
@@ -1890,8 +1894,8 @@ __acquires(&gcwq->lock)
 	 * Unbound gcwq isn't concurrency managed and work items should be
 	 * executed ASAP.  Wake up another worker if necessary.
 	 */
-	if ((worker->flags & WORKER_UNBOUND) && need_more_worker(gcwq))
-		wake_up_worker(gcwq);
+	if ((worker->flags & WORKER_UNBOUND) && need_more_worker(pool))
+		wake_up_worker(pool);
 
 	spin_unlock_irq(&gcwq->lock);
 
@@ -1983,11 +1987,11 @@ woke_up:
 	worker_leave_idle(worker);
 recheck:
 	/* no more worker necessary? */
-	if (!need_more_worker(gcwq))
+	if (!need_more_worker(pool))
 		goto sleep;
 
 	/* do we need to manage? */
-	if (unlikely(!may_start_working(gcwq)) && manage_workers(worker))
+	if (unlikely(!may_start_working(pool)) && manage_workers(worker))
 		goto recheck;
 
 	/*
@@ -2018,11 +2022,11 @@ recheck:
 			move_linked_works(work, &worker->scheduled, NULL);
 			process_scheduled_works(worker);
 		}
-	} while (keep_working(gcwq));
+	} while (keep_working(pool));
 
 	worker_set_flags(worker, WORKER_PREP, false);
 sleep:
-	if (unlikely(need_to_manage_workers(gcwq)) && manage_workers(worker))
+	if (unlikely(need_to_manage_workers(pool)) && manage_workers(worker))
 		goto recheck;
 
 	/*
@@ -2107,8 +2111,8 @@ repeat:
 		 * regular worker; otherwise, we end up with 0 concurrency
 		 * and stalling the execution.
 		 */
-		if (keep_working(gcwq))
-			wake_up_worker(gcwq);
+		if (keep_working(pool))
+			wake_up_worker(pool);
 
 		spin_unlock_irq(&gcwq->lock);
 	}
@@ -3383,7 +3387,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * keep_working() are always true as long as the worklist is
 	 * not empty.
 	 */
-	atomic_set(get_gcwq_nr_running(gcwq->cpu), 0);
+	atomic_set(get_pool_nr_running(&gcwq->pool), 0);
 
 	spin_unlock_irq(&gcwq->lock);
 	del_timer_sync(&gcwq->pool.idle_timer);
@@ -3424,9 +3428,9 @@ static int __cpuinit trustee_thread(void *__gcwq)
 			wake_up_process(worker->task);
 		}
 
-		if (need_to_create_worker(gcwq)) {
+		if (need_to_create_worker(&gcwq->pool)) {
 			spin_unlock_irq(&gcwq->lock);
-			worker = create_worker(gcwq, false);
+			worker = create_worker(&gcwq->pool, false);
 			spin_lock_irq(&gcwq->lock);
 			if (worker) {
 				worker->flags |= WORKER_ROGUE;
@@ -3540,7 +3544,7 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		/* fall through */
 	case CPU_UP_PREPARE:
 		BUG_ON(gcwq->pool.first_idle);
-		new_worker = create_worker(gcwq, false);
+		new_worker = create_worker(&gcwq->pool, false);
 		if (!new_worker) {
 			if (new_trustee)
 				kthread_stop(new_trustee);
@@ -3788,7 +3792,7 @@ void thaw_workqueues(void)
 				cwq_activate_first_delayed(cwq);
 		}
 
-		wake_up_worker(gcwq);
+		wake_up_worker(&gcwq->pool);
 
 		spin_unlock_irq(&gcwq->lock);
 	}
@@ -3822,10 +3826,10 @@ static int __init init_workqueues(void)
 
 		init_timer_deferrable(&gcwq->pool.idle_timer);
 		gcwq->pool.idle_timer.function = idle_worker_timeout;
-		gcwq->pool.idle_timer.data = (unsigned long)gcwq;
+		gcwq->pool.idle_timer.data = (unsigned long)&gcwq->pool;
 
 		setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
-			    (unsigned long)gcwq);
+			    (unsigned long)&gcwq->pool);
 
 		ida_init(&gcwq->pool.worker_ida);
 
@@ -3840,7 +3844,7 @@ static int __init init_workqueues(void)
 
 		if (cpu != WORK_CPU_UNBOUND)
 			gcwq->flags &= ~GCWQ_DISASSOCIATED;
-		worker = create_worker(gcwq, true);
+		worker = create_worker(&gcwq->pool, true);
 		BUG_ON(!worker);
 		spin_lock_irq(&gcwq->lock);
 		start_worker(worker);
-- 
1.7.7.3

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* [PATCH 4/6] workqueue: separate out worker_pool flags
  2012-07-09 18:41 ` Tejun Heo
  (?)
@ 2012-07-09 18:41   ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-09 18:41 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen, Tejun Heo

GCWQ_MANAGE_WORKERS, GCWQ_MANAGING_WORKERS and GCWQ_HIGHPRI_PENDING
are per-pool properties.  Add worker_pool->flags and make the above
three flags per-pool flags.

The changes in this patch are mechanical and don't cause any
functional difference.  This is to prepare for multiple pools per
gcwq.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 kernel/workqueue.c |   47 +++++++++++++++++++++++++----------------------
 1 files changed, 25 insertions(+), 22 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 9f82c25..e700dcc 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -46,11 +46,13 @@
 
 enum {
 	/* global_cwq flags */
-	GCWQ_MANAGE_WORKERS	= 1 << 0,	/* need to manage workers */
-	GCWQ_MANAGING_WORKERS	= 1 << 1,	/* managing workers */
-	GCWQ_DISASSOCIATED	= 1 << 2,	/* cpu can't serve workers */
-	GCWQ_FREEZING		= 1 << 3,	/* freeze in progress */
-	GCWQ_HIGHPRI_PENDING	= 1 << 4,	/* highpri works on queue */
+	GCWQ_DISASSOCIATED	= 1 << 0,	/* cpu can't serve workers */
+	GCWQ_FREEZING		= 1 << 1,	/* freeze in progress */
+
+	/* pool flags */
+	POOL_MANAGE_WORKERS	= 1 << 0,	/* need to manage workers */
+	POOL_MANAGING_WORKERS	= 1 << 1,	/* managing workers */
+	POOL_HIGHPRI_PENDING	= 1 << 2,	/* highpri works on queue */
 
 	/* worker flags */
 	WORKER_STARTED		= 1 << 0,	/* started */
@@ -142,6 +144,7 @@ struct worker {
 
 struct worker_pool {
 	struct global_cwq	*gcwq;		/* I: the owning gcwq */
+	unsigned int		flags;		/* X: flags */
 
 	struct list_head	worklist;	/* L: list of pending works */
 	int			nr_workers;	/* L: total number of workers */
@@ -583,7 +586,7 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
 static bool __need_more_worker(struct worker_pool *pool)
 {
 	return !atomic_read(get_pool_nr_running(pool)) ||
-		pool->gcwq->flags & GCWQ_HIGHPRI_PENDING;
+		(pool->flags & POOL_HIGHPRI_PENDING);
 }
 
 /*
@@ -612,7 +615,7 @@ static bool keep_working(struct worker_pool *pool)
 
 	return !list_empty(&pool->worklist) &&
 		(atomic_read(nr_running) <= 1 ||
-		 pool->gcwq->flags & GCWQ_HIGHPRI_PENDING);
+		 (pool->flags & POOL_HIGHPRI_PENDING));
 }
 
 /* Do we need a new worker?  Called from manager. */
@@ -625,13 +628,13 @@ static bool need_to_create_worker(struct worker_pool *pool)
 static bool need_to_manage_workers(struct worker_pool *pool)
 {
 	return need_to_create_worker(pool) ||
-		pool->gcwq->flags & GCWQ_MANAGE_WORKERS;
+		(pool->flags & POOL_MANAGE_WORKERS);
 }
 
 /* Do we have too many workers and should some go away? */
 static bool too_many_workers(struct worker_pool *pool)
 {
-	bool managing = pool->gcwq->flags & GCWQ_MANAGING_WORKERS;
+	bool managing = pool->flags & POOL_MANAGING_WORKERS;
 	int nr_idle = pool->nr_idle + managing; /* manager is considered idle */
 	int nr_busy = pool->nr_workers - nr_idle;
 
@@ -889,7 +892,7 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
  * position for the work.  If @cwq is for HIGHPRI wq, the work is
  * queued at the head of the queue but in FIFO order with respect to
  * other HIGHPRI works; otherwise, at the end of the queue.  This
- * function also sets GCWQ_HIGHPRI_PENDING flag to hint @pool that
+ * function also sets POOL_HIGHPRI_PENDING flag to hint @pool that
  * there are HIGHPRI works pending.
  *
  * CONTEXT:
@@ -913,7 +916,7 @@ static inline struct list_head *pool_determine_ins_pos(struct worker_pool *pool,
 			break;
 	}
 
-	pool->gcwq->flags |= GCWQ_HIGHPRI_PENDING;
+	pool->flags |= POOL_HIGHPRI_PENDING;
 	return &twork->entry;
 }
 
@@ -1500,7 +1503,7 @@ static void idle_worker_timeout(unsigned long __pool)
 			mod_timer(&pool->idle_timer, expires);
 		else {
 			/* it's been idle for too long, wake up manager */
-			gcwq->flags |= GCWQ_MANAGE_WORKERS;
+			pool->flags |= POOL_MANAGE_WORKERS;
 			wake_up_worker(pool);
 		}
 	}
@@ -1680,11 +1683,11 @@ static bool manage_workers(struct worker *worker)
 	struct global_cwq *gcwq = pool->gcwq;
 	bool ret = false;
 
-	if (gcwq->flags & GCWQ_MANAGING_WORKERS)
+	if (pool->flags & POOL_MANAGING_WORKERS)
 		return ret;
 
-	gcwq->flags &= ~GCWQ_MANAGE_WORKERS;
-	gcwq->flags |= GCWQ_MANAGING_WORKERS;
+	pool->flags &= ~POOL_MANAGE_WORKERS;
+	pool->flags |= POOL_MANAGING_WORKERS;
 
 	/*
 	 * Destroy and then create so that may_start_working() is true
@@ -1693,7 +1696,7 @@ static bool manage_workers(struct worker *worker)
 	ret |= maybe_destroy_workers(pool);
 	ret |= maybe_create_worker(pool);
 
-	gcwq->flags &= ~GCWQ_MANAGING_WORKERS;
+	pool->flags &= ~POOL_MANAGING_WORKERS;
 
 	/*
 	 * The trustee might be waiting to take over the manager
@@ -1872,7 +1875,7 @@ __acquires(&gcwq->lock)
 	 * If HIGHPRI_PENDING, check the next work, and, if HIGHPRI,
 	 * wake up another worker; otherwise, clear HIGHPRI_PENDING.
 	 */
-	if (unlikely(gcwq->flags & GCWQ_HIGHPRI_PENDING)) {
+	if (unlikely(pool->flags & POOL_HIGHPRI_PENDING)) {
 		struct work_struct *nwork = list_first_entry(&pool->worklist,
 					 struct work_struct, entry);
 
@@ -1880,7 +1883,7 @@ __acquires(&gcwq->lock)
 		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
 			wake_up_worker(pool);
 		else
-			gcwq->flags &= ~GCWQ_HIGHPRI_PENDING;
+			pool->flags &= ~POOL_HIGHPRI_PENDING;
 	}
 
 	/*
@@ -3360,10 +3363,10 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * cancelled.
 	 */
 	BUG_ON(gcwq->cpu != smp_processor_id());
-	rc = trustee_wait_event(!(gcwq->flags & GCWQ_MANAGING_WORKERS));
+	rc = trustee_wait_event(!(gcwq->pool.flags & POOL_MANAGING_WORKERS));
 	BUG_ON(rc < 0);
 
-	gcwq->flags |= GCWQ_MANAGING_WORKERS;
+	gcwq->pool.flags |= POOL_MANAGING_WORKERS;
 
 	list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
 		worker->flags |= WORKER_ROGUE;
@@ -3487,7 +3490,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	}
 
 	/* relinquish manager role */
-	gcwq->flags &= ~GCWQ_MANAGING_WORKERS;
+	gcwq->pool.flags &= ~POOL_MANAGING_WORKERS;
 
 	/* notify completion */
 	gcwq->trustee = NULL;
@@ -3604,7 +3607,7 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		spin_unlock_irq(&gcwq->lock);
 		kthread_bind(gcwq->pool.first_idle->task, cpu);
 		spin_lock_irq(&gcwq->lock);
-		gcwq->flags |= GCWQ_MANAGE_WORKERS;
+		gcwq->pool.flags |= POOL_MANAGE_WORKERS;
 		start_worker(gcwq->pool.first_idle);
 		gcwq->pool.first_idle = NULL;
 		break;
-- 
1.7.7.3

* [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()
  2012-07-09 18:41 ` Tejun Heo
  (?)
@ 2012-07-09 18:41   ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-09 18:41 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen, Tejun Heo

Introduce NR_WORKER_POOLS and for_each_worker_pool() and convert code
paths which need to manipulate all pools in a gcwq to use them.
NR_WORKER_POOLS is currently one and for_each_worker_pool() iterates
over only @gcwq->pool.

Note that nr_running is a per-pool property and converted to an array
with NR_WORKER_POOLS elements and renamed to pool_nr_running.

The changes in this patch are mechanical and don't cause any
functional difference.  This is to prepare for multiple pools per
gcwq.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 kernel/workqueue.c |  225 ++++++++++++++++++++++++++++++++++++----------------
 1 files changed, 155 insertions(+), 70 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index e700dcc..9cbf3bc 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -74,6 +74,8 @@ enum {
 	TRUSTEE_RELEASE		= 3,		/* release workers */
 	TRUSTEE_DONE		= 4,		/* trustee is done */
 
+	NR_WORKER_POOLS		= 1,		/* # worker pools per gcwq */
+
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
 	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
 	BUSY_WORKER_HASH_MASK	= BUSY_WORKER_HASH_SIZE - 1,
@@ -274,6 +276,9 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 #define CREATE_TRACE_POINTS
 #include <trace/events/workqueue.h>
 
+#define for_each_worker_pool(pool, gcwq)				\
+	for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
 	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
 		hlist_for_each_entry(worker, pos, &gcwq->busy_hash[i], hentry)
@@ -454,7 +459,7 @@ static bool workqueue_freezing;		/* W: have wqs started freezing? */
  * try_to_wake_up().  Put it in a separate cacheline.
  */
 static DEFINE_PER_CPU(struct global_cwq, global_cwq);
-static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
+static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, pool_nr_running[NR_WORKER_POOLS]);
 
 /*
  * Global cpu workqueue and nr_running counter for unbound gcwq.  The
@@ -462,7 +467,9 @@ static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
  * workers have WORKER_UNBOUND set.
  */
 static struct global_cwq unbound_global_cwq;
-static atomic_t unbound_gcwq_nr_running = ATOMIC_INIT(0);	/* always 0 */
+static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {
+	[0 ... NR_WORKER_POOLS - 1]	= ATOMIC_INIT(0),	/* always 0 */
+};
 
 static int worker_thread(void *__worker);
 
@@ -477,11 +484,14 @@ static struct global_cwq *get_gcwq(unsigned int cpu)
 static atomic_t *get_pool_nr_running(struct worker_pool *pool)
 {
 	int cpu = pool->gcwq->cpu;
+	atomic_t (*nr_running)[NR_WORKER_POOLS];
 
 	if (cpu != WORK_CPU_UNBOUND)
-		return &per_cpu(gcwq_nr_running, cpu);
+		nr_running = &per_cpu(pool_nr_running, cpu);
 	else
-		return &unbound_gcwq_nr_running;
+		nr_running = &unbound_pool_nr_running;
+
+	return nr_running[0];
 }
 
 static struct cpu_workqueue_struct *get_cwq(unsigned int cpu,
@@ -3345,9 +3355,30 @@ EXPORT_SYMBOL_GPL(work_busy);
 	__ret1 < 0 ? -1 : 0;						\
 })
 
+static bool gcwq_is_managing_workers(struct global_cwq *gcwq)
+{
+	struct worker_pool *pool;
+
+	for_each_worker_pool(pool, gcwq)
+		if (pool->flags & POOL_MANAGING_WORKERS)
+			return true;
+	return false;
+}
+
+static bool gcwq_has_idle_workers(struct global_cwq *gcwq)
+{
+	struct worker_pool *pool;
+
+	for_each_worker_pool(pool, gcwq)
+		if (!list_empty(&pool->idle_list))
+			return true;
+	return false;
+}
+
 static int __cpuinit trustee_thread(void *__gcwq)
 {
 	struct global_cwq *gcwq = __gcwq;
+	struct worker_pool *pool;
 	struct worker *worker;
 	struct work_struct *work;
 	struct hlist_node *pos;
@@ -3363,13 +3394,15 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * cancelled.
 	 */
 	BUG_ON(gcwq->cpu != smp_processor_id());
-	rc = trustee_wait_event(!(gcwq->pool.flags & POOL_MANAGING_WORKERS));
+	rc = trustee_wait_event(!gcwq_is_managing_workers(gcwq));
 	BUG_ON(rc < 0);
 
-	gcwq->pool.flags |= POOL_MANAGING_WORKERS;
+	for_each_worker_pool(pool, gcwq) {
+		pool->flags |= POOL_MANAGING_WORKERS;
 
-	list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
-		worker->flags |= WORKER_ROGUE;
+		list_for_each_entry(worker, &pool->idle_list, entry)
+			worker->flags |= WORKER_ROGUE;
+	}
 
 	for_each_busy_worker(worker, i, pos, gcwq)
 		worker->flags |= WORKER_ROGUE;
@@ -3390,10 +3423,12 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * keep_working() are always true as long as the worklist is
 	 * not empty.
 	 */
-	atomic_set(get_pool_nr_running(&gcwq->pool), 0);
+	for_each_worker_pool(pool, gcwq)
+		atomic_set(get_pool_nr_running(pool), 0);
 
 	spin_unlock_irq(&gcwq->lock);
-	del_timer_sync(&gcwq->pool.idle_timer);
+	for_each_worker_pool(pool, gcwq)
+		del_timer_sync(&pool->idle_timer);
 	spin_lock_irq(&gcwq->lock);
 
 	/*
@@ -3415,29 +3450,38 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * may be frozen works in freezable cwqs.  Don't declare
 	 * completion while frozen.
 	 */
-	while (gcwq->pool.nr_workers != gcwq->pool.nr_idle ||
-	       gcwq->flags & GCWQ_FREEZING ||
-	       gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
-		int nr_works = 0;
+	while (true) {
+		bool busy = false;
 
-		list_for_each_entry(work, &gcwq->pool.worklist, entry) {
-			send_mayday(work);
-			nr_works++;
-		}
+		for_each_worker_pool(pool, gcwq)
+			busy |= pool->nr_workers != pool->nr_idle;
 
-		list_for_each_entry(worker, &gcwq->pool.idle_list, entry) {
-			if (!nr_works--)
-				break;
-			wake_up_process(worker->task);
-		}
+		if (!busy && !(gcwq->flags & GCWQ_FREEZING) &&
+		    gcwq->trustee_state != TRUSTEE_IN_CHARGE)
+			break;
 
-		if (need_to_create_worker(&gcwq->pool)) {
-			spin_unlock_irq(&gcwq->lock);
-			worker = create_worker(&gcwq->pool, false);
-			spin_lock_irq(&gcwq->lock);
-			if (worker) {
-				worker->flags |= WORKER_ROGUE;
-				start_worker(worker);
+		for_each_worker_pool(pool, gcwq) {
+			int nr_works = 0;
+
+			list_for_each_entry(work, &pool->worklist, entry) {
+				send_mayday(work);
+				nr_works++;
+			}
+
+			list_for_each_entry(worker, &pool->idle_list, entry) {
+				if (!nr_works--)
+					break;
+				wake_up_process(worker->task);
+			}
+
+			if (need_to_create_worker(pool)) {
+				spin_unlock_irq(&gcwq->lock);
+				worker = create_worker(pool, false);
+				spin_lock_irq(&gcwq->lock);
+				if (worker) {
+					worker->flags |= WORKER_ROGUE;
+					start_worker(worker);
+				}
 			}
 		}
 
@@ -3452,11 +3496,18 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * all workers till we're canceled.
 	 */
 	do {
-		rc = trustee_wait_event(!list_empty(&gcwq->pool.idle_list));
-		while (!list_empty(&gcwq->pool.idle_list))
-			destroy_worker(list_first_entry(&gcwq->pool.idle_list,
-							struct worker, entry));
-	} while (gcwq->pool.nr_workers && rc >= 0);
+		rc = trustee_wait_event(gcwq_has_idle_workers(gcwq));
+
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			while (!list_empty(&pool->idle_list)) {
+				worker = list_first_entry(&pool->idle_list,
+							  struct worker, entry);
+				destroy_worker(worker);
+			}
+			i |= pool->nr_workers;
+		}
+	} while (i && rc >= 0);
 
 	/*
 	 * At this point, either draining has completed and no worker
@@ -3465,7 +3516,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * Tell the remaining busy ones to rebind once it finishes the
 	 * currently scheduled works by scheduling the rebind_work.
 	 */
-	WARN_ON(!list_empty(&gcwq->pool.idle_list));
+	for_each_worker_pool(pool, gcwq)
+		WARN_ON(!list_empty(&pool->idle_list));
 
 	for_each_busy_worker(worker, i, pos, gcwq) {
 		struct work_struct *rebind_work = &worker->rebind_work;
@@ -3490,7 +3542,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	}
 
 	/* relinquish manager role */
-	gcwq->pool.flags &= ~POOL_MANAGING_WORKERS;
+	for_each_worker_pool(pool, gcwq)
+		pool->flags &= ~POOL_MANAGING_WORKERS;
 
 	/* notify completion */
 	gcwq->trustee = NULL;
@@ -3532,8 +3585,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 	unsigned int cpu = (unsigned long)hcpu;
 	struct global_cwq *gcwq = get_gcwq(cpu);
 	struct task_struct *new_trustee = NULL;
-	struct worker *uninitialized_var(new_worker);
+	struct worker *new_workers[NR_WORKER_POOLS] = { };
+	struct worker_pool *pool;
 	unsigned long flags;
+	int i;
 
 	action &= ~CPU_TASKS_FROZEN;
 
@@ -3546,12 +3601,12 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		kthread_bind(new_trustee, cpu);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->pool.first_idle);
-		new_worker = create_worker(&gcwq->pool, false);
-		if (!new_worker) {
-			if (new_trustee)
-				kthread_stop(new_trustee);
-			return NOTIFY_BAD;
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			BUG_ON(pool->first_idle);
+			new_workers[i] = create_worker(pool, false);
+			if (!new_workers[i++])
+				goto err_destroy;
 		}
 	}
 
@@ -3568,8 +3623,11 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = new_worker;
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			BUG_ON(pool->first_idle);
+			pool->first_idle = new_workers[i++];
+		}
 		break;
 
 	case CPU_DYING:
@@ -3586,8 +3644,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		gcwq->trustee_state = TRUSTEE_BUTCHER;
 		/* fall through */
 	case CPU_UP_CANCELED:
-		destroy_worker(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = NULL;
+		for_each_worker_pool(pool, gcwq) {
+			destroy_worker(pool->first_idle);
+			pool->first_idle = NULL;
+		}
 		break;
 
 	case CPU_DOWN_FAILED:
@@ -3604,18 +3664,32 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		 * Put the first_idle in and request a real manager to
 		 * take a look.
 		 */
-		spin_unlock_irq(&gcwq->lock);
-		kthread_bind(gcwq->pool.first_idle->task, cpu);
-		spin_lock_irq(&gcwq->lock);
-		gcwq->pool.flags |= POOL_MANAGE_WORKERS;
-		start_worker(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = NULL;
+		for_each_worker_pool(pool, gcwq) {
+			spin_unlock_irq(&gcwq->lock);
+			kthread_bind(pool->first_idle->task, cpu);
+			spin_lock_irq(&gcwq->lock);
+			pool->flags |= POOL_MANAGE_WORKERS;
+			start_worker(pool->first_idle);
+			pool->first_idle = NULL;
+		}
 		break;
 	}
 
 	spin_unlock_irqrestore(&gcwq->lock, flags);
 
 	return notifier_from_errno(0);
+
+err_destroy:
+	if (new_trustee)
+		kthread_stop(new_trustee);
+
+	spin_lock_irqsave(&gcwq->lock, flags);
+	for (i = 0; i < NR_WORKER_POOLS; i++)
+		if (new_workers[i])
+			destroy_worker(new_workers[i]);
+	spin_unlock_irqrestore(&gcwq->lock, flags);
+
+	return NOTIFY_BAD;
 }
 
 #ifdef CONFIG_SMP
@@ -3774,6 +3848,7 @@ void thaw_workqueues(void)
 
 	for_each_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		struct worker_pool *pool;
 		struct workqueue_struct *wq;
 
 		spin_lock_irq(&gcwq->lock);
@@ -3795,7 +3870,8 @@ void thaw_workqueues(void)
 				cwq_activate_first_delayed(cwq);
 		}
 
-		wake_up_worker(&gcwq->pool);
+		for_each_worker_pool(pool, gcwq)
+			wake_up_worker(pool);
 
 		spin_unlock_irq(&gcwq->lock);
 	}
@@ -3816,25 +3892,29 @@ static int __init init_workqueues(void)
 	/* initialize gcwqs */
 	for_each_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		struct worker_pool *pool;
 
 		spin_lock_init(&gcwq->lock);
-		gcwq->pool.gcwq = gcwq;
-		INIT_LIST_HEAD(&gcwq->pool.worklist);
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		INIT_LIST_HEAD(&gcwq->pool.idle_list);
 		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
 			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
 
-		init_timer_deferrable(&gcwq->pool.idle_timer);
-		gcwq->pool.idle_timer.function = idle_worker_timeout;
-		gcwq->pool.idle_timer.data = (unsigned long)&gcwq->pool;
+		for_each_worker_pool(pool, gcwq) {
+			pool->gcwq = gcwq;
+			INIT_LIST_HEAD(&pool->worklist);
+			INIT_LIST_HEAD(&pool->idle_list);
+
+			init_timer_deferrable(&pool->idle_timer);
+			pool->idle_timer.function = idle_worker_timeout;
+			pool->idle_timer.data = (unsigned long)pool;
 
-		setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
-			    (unsigned long)&gcwq->pool);
+			setup_timer(&pool->mayday_timer, gcwq_mayday_timeout,
+				    (unsigned long)pool);
 
-		ida_init(&gcwq->pool.worker_ida);
+			ida_init(&pool->worker_ida);
+		}
 
 		gcwq->trustee_state = TRUSTEE_DONE;
 		init_waitqueue_head(&gcwq->trustee_wait);
@@ -3843,15 +3923,20 @@ static int __init init_workqueues(void)
 	/* create the initial worker */
 	for_each_online_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
-		struct worker *worker;
+		struct worker_pool *pool;
 
 		if (cpu != WORK_CPU_UNBOUND)
 			gcwq->flags &= ~GCWQ_DISASSOCIATED;
-		worker = create_worker(&gcwq->pool, true);
-		BUG_ON(!worker);
-		spin_lock_irq(&gcwq->lock);
-		start_worker(worker);
-		spin_unlock_irq(&gcwq->lock);
+
+		for_each_worker_pool(pool, gcwq) {
+			struct worker *worker;
+
+			worker = create_worker(pool, true);
+			BUG_ON(!worker);
+			spin_lock_irq(&gcwq->lock);
+			start_worker(worker);
+			spin_unlock_irq(&gcwq->lock);
+		}
 	}
 
 	system_wq = alloc_workqueue("events", 0, 0);
-- 
1.7.7.3

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()
@ 2012-07-09 18:41   ` Tejun Heo
  0 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-09 18:41 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen, Tejun Heo

Introduce NR_WORKER_POOLS and for_each_worker_pool() and convert code
paths which need to manipulate all pools in a gcwq to use them.
NR_WORKER_POOLS is currently one and for_each_worker_pool() iterates
over only @gcwq->pool.

Note that nr_running is a per-pool property; it is converted to an
array with NR_WORKER_POOLS elements and renamed to pool_nr_running.

The changes in this patch are mechanical and don't cause any
functional difference.  This is to prepare for multiple pools per
gcwq.
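
As an out-of-tree sketch of the shape this introduces (the structs
below are stripped-down stand-ins, not the kernel's definitions):

```c
/*
 * Stand-alone sketch, not kernel code: with NR_WORKER_POOLS == 1,
 * for_each_worker_pool() visits the single embedded pool exactly
 * once and then terminates.
 */
#include <assert.h>
#include <stddef.h>

enum { NR_WORKER_POOLS = 1 };

struct worker_pool { int nr_workers; };
struct global_cwq { struct worker_pool pool; };

/* Same shape as the patch's macro: yield &gcwq->pool, then NULL. */
#define for_each_worker_pool(pool, gcwq) \
	for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)

static int count_pools(struct global_cwq *gcwq)
{
	struct worker_pool *pool;
	int n = 0;

	for_each_worker_pool(pool, gcwq)
		n++;
	return n;
}
```

Once NR_WORKER_POOLS grows past one and the iterator walks an array
instead, the converted loop bodies need no further changes.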

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 kernel/workqueue.c |  225 ++++++++++++++++++++++++++++++++++++----------------
 1 files changed, 155 insertions(+), 70 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index e700dcc..9cbf3bc 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -74,6 +74,8 @@ enum {
 	TRUSTEE_RELEASE		= 3,		/* release workers */
 	TRUSTEE_DONE		= 4,		/* trustee is done */
 
+	NR_WORKER_POOLS		= 1,		/* # worker pools per gcwq */
+
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
 	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
 	BUSY_WORKER_HASH_MASK	= BUSY_WORKER_HASH_SIZE - 1,
@@ -274,6 +276,9 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 #define CREATE_TRACE_POINTS
 #include <trace/events/workqueue.h>
 
+#define for_each_worker_pool(pool, gcwq)				\
+	for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
 	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
 		hlist_for_each_entry(worker, pos, &gcwq->busy_hash[i], hentry)
@@ -454,7 +459,7 @@ static bool workqueue_freezing;		/* W: have wqs started freezing? */
  * try_to_wake_up().  Put it in a separate cacheline.
  */
 static DEFINE_PER_CPU(struct global_cwq, global_cwq);
-static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
+static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, pool_nr_running[NR_WORKER_POOLS]);
 
 /*
  * Global cpu workqueue and nr_running counter for unbound gcwq.  The
@@ -462,7 +467,9 @@ static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
  * workers have WORKER_UNBOUND set.
  */
 static struct global_cwq unbound_global_cwq;
-static atomic_t unbound_gcwq_nr_running = ATOMIC_INIT(0);	/* always 0 */
+static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {
+	[0 ... NR_WORKER_POOLS - 1]	= ATOMIC_INIT(0),	/* always 0 */
+};
 
 static int worker_thread(void *__worker);
 
@@ -477,11 +484,14 @@ static struct global_cwq *get_gcwq(unsigned int cpu)
 static atomic_t *get_pool_nr_running(struct worker_pool *pool)
 {
 	int cpu = pool->gcwq->cpu;
+	atomic_t (*nr_running)[NR_WORKER_POOLS];
 
 	if (cpu != WORK_CPU_UNBOUND)
-		return &per_cpu(gcwq_nr_running, cpu);
+		nr_running = &per_cpu(pool_nr_running, cpu);
 	else
-		return &unbound_gcwq_nr_running;
+		nr_running = &unbound_pool_nr_running;
+
+	return nr_running[0];
 }
 
 static struct cpu_workqueue_struct *get_cwq(unsigned int cpu,
@@ -3345,9 +3355,30 @@ EXPORT_SYMBOL_GPL(work_busy);
 	__ret1 < 0 ? -1 : 0;						\
 })
 
+static bool gcwq_is_managing_workers(struct global_cwq *gcwq)
+{
+	struct worker_pool *pool;
+
+	for_each_worker_pool(pool, gcwq)
+		if (pool->flags & POOL_MANAGING_WORKERS)
+			return true;
+	return false;
+}
+
+static bool gcwq_has_idle_workers(struct global_cwq *gcwq)
+{
+	struct worker_pool *pool;
+
+	for_each_worker_pool(pool, gcwq)
+		if (!list_empty(&pool->idle_list))
+			return true;
+	return false;
+}
+
 static int __cpuinit trustee_thread(void *__gcwq)
 {
 	struct global_cwq *gcwq = __gcwq;
+	struct worker_pool *pool;
 	struct worker *worker;
 	struct work_struct *work;
 	struct hlist_node *pos;
@@ -3363,13 +3394,15 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * cancelled.
 	 */
 	BUG_ON(gcwq->cpu != smp_processor_id());
-	rc = trustee_wait_event(!(gcwq->pool.flags & POOL_MANAGING_WORKERS));
+	rc = trustee_wait_event(!gcwq_is_managing_workers(gcwq));
 	BUG_ON(rc < 0);
 
-	gcwq->pool.flags |= POOL_MANAGING_WORKERS;
+	for_each_worker_pool(pool, gcwq) {
+		pool->flags |= POOL_MANAGING_WORKERS;
 
-	list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
-		worker->flags |= WORKER_ROGUE;
+		list_for_each_entry(worker, &pool->idle_list, entry)
+			worker->flags |= WORKER_ROGUE;
+	}
 
 	for_each_busy_worker(worker, i, pos, gcwq)
 		worker->flags |= WORKER_ROGUE;
@@ -3390,10 +3423,12 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * keep_working() are always true as long as the worklist is
 	 * not empty.
 	 */
-	atomic_set(get_pool_nr_running(&gcwq->pool), 0);
+	for_each_worker_pool(pool, gcwq)
+		atomic_set(get_pool_nr_running(pool), 0);
 
 	spin_unlock_irq(&gcwq->lock);
-	del_timer_sync(&gcwq->pool.idle_timer);
+	for_each_worker_pool(pool, gcwq)
+		del_timer_sync(&pool->idle_timer);
 	spin_lock_irq(&gcwq->lock);
 
 	/*
@@ -3415,29 +3450,38 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * may be frozen works in freezable cwqs.  Don't declare
 	 * completion while frozen.
 	 */
-	while (gcwq->pool.nr_workers != gcwq->pool.nr_idle ||
-	       gcwq->flags & GCWQ_FREEZING ||
-	       gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
-		int nr_works = 0;
+	while (true) {
+		bool busy = false;
 
-		list_for_each_entry(work, &gcwq->pool.worklist, entry) {
-			send_mayday(work);
-			nr_works++;
-		}
+		for_each_worker_pool(pool, gcwq)
+			busy |= pool->nr_workers != pool->nr_idle;
 
-		list_for_each_entry(worker, &gcwq->pool.idle_list, entry) {
-			if (!nr_works--)
-				break;
-			wake_up_process(worker->task);
-		}
+		if (!busy && !(gcwq->flags & GCWQ_FREEZING) &&
+		    gcwq->trustee_state != TRUSTEE_IN_CHARGE)
+			break;
 
-		if (need_to_create_worker(&gcwq->pool)) {
-			spin_unlock_irq(&gcwq->lock);
-			worker = create_worker(&gcwq->pool, false);
-			spin_lock_irq(&gcwq->lock);
-			if (worker) {
-				worker->flags |= WORKER_ROGUE;
-				start_worker(worker);
+		for_each_worker_pool(pool, gcwq) {
+			int nr_works = 0;
+
+			list_for_each_entry(work, &pool->worklist, entry) {
+				send_mayday(work);
+				nr_works++;
+			}
+
+			list_for_each_entry(worker, &pool->idle_list, entry) {
+				if (!nr_works--)
+					break;
+				wake_up_process(worker->task);
+			}
+
+			if (need_to_create_worker(pool)) {
+				spin_unlock_irq(&gcwq->lock);
+				worker = create_worker(pool, false);
+				spin_lock_irq(&gcwq->lock);
+				if (worker) {
+					worker->flags |= WORKER_ROGUE;
+					start_worker(worker);
+				}
 			}
 		}
 
@@ -3452,11 +3496,18 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * all workers till we're canceled.
 	 */
 	do {
-		rc = trustee_wait_event(!list_empty(&gcwq->pool.idle_list));
-		while (!list_empty(&gcwq->pool.idle_list))
-			destroy_worker(list_first_entry(&gcwq->pool.idle_list,
-							struct worker, entry));
-	} while (gcwq->pool.nr_workers && rc >= 0);
+		rc = trustee_wait_event(gcwq_has_idle_workers(gcwq));
+
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			while (!list_empty(&pool->idle_list)) {
+				worker = list_first_entry(&pool->idle_list,
+							  struct worker, entry);
+				destroy_worker(worker);
+			}
+			i |= pool->nr_workers;
+		}
+	} while (i && rc >= 0);
 
 	/*
 	 * At this point, either draining has completed and no worker
@@ -3465,7 +3516,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * Tell the remaining busy ones to rebind once it finishes the
 	 * currently scheduled works by scheduling the rebind_work.
 	 */
-	WARN_ON(!list_empty(&gcwq->pool.idle_list));
+	for_each_worker_pool(pool, gcwq)
+		WARN_ON(!list_empty(&pool->idle_list));
 
 	for_each_busy_worker(worker, i, pos, gcwq) {
 		struct work_struct *rebind_work = &worker->rebind_work;
@@ -3490,7 +3542,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	}
 
 	/* relinquish manager role */
-	gcwq->pool.flags &= ~POOL_MANAGING_WORKERS;
+	for_each_worker_pool(pool, gcwq)
+		pool->flags &= ~POOL_MANAGING_WORKERS;
 
 	/* notify completion */
 	gcwq->trustee = NULL;
@@ -3532,8 +3585,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 	unsigned int cpu = (unsigned long)hcpu;
 	struct global_cwq *gcwq = get_gcwq(cpu);
 	struct task_struct *new_trustee = NULL;
-	struct worker *uninitialized_var(new_worker);
+	struct worker *new_workers[NR_WORKER_POOLS] = { };
+	struct worker_pool *pool;
 	unsigned long flags;
+	int i;
 
 	action &= ~CPU_TASKS_FROZEN;
 
@@ -3546,12 +3601,12 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		kthread_bind(new_trustee, cpu);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->pool.first_idle);
-		new_worker = create_worker(&gcwq->pool, false);
-		if (!new_worker) {
-			if (new_trustee)
-				kthread_stop(new_trustee);
-			return NOTIFY_BAD;
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			BUG_ON(pool->first_idle);
+			new_workers[i] = create_worker(pool, false);
+			if (!new_workers[i++])
+				goto err_destroy;
 		}
 	}
 
@@ -3568,8 +3623,11 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = new_worker;
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			BUG_ON(pool->first_idle);
+			pool->first_idle = new_workers[i++];
+		}
 		break;
 
 	case CPU_DYING:
@@ -3586,8 +3644,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		gcwq->trustee_state = TRUSTEE_BUTCHER;
 		/* fall through */
 	case CPU_UP_CANCELED:
-		destroy_worker(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = NULL;
+		for_each_worker_pool(pool, gcwq) {
+			destroy_worker(pool->first_idle);
+			pool->first_idle = NULL;
+		}
 		break;
 
 	case CPU_DOWN_FAILED:
@@ -3604,18 +3664,32 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		 * Put the first_idle in and request a real manager to
 		 * take a look.
 		 */
-		spin_unlock_irq(&gcwq->lock);
-		kthread_bind(gcwq->pool.first_idle->task, cpu);
-		spin_lock_irq(&gcwq->lock);
-		gcwq->pool.flags |= POOL_MANAGE_WORKERS;
-		start_worker(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = NULL;
+		for_each_worker_pool(pool, gcwq) {
+			spin_unlock_irq(&gcwq->lock);
+			kthread_bind(pool->first_idle->task, cpu);
+			spin_lock_irq(&gcwq->lock);
+			pool->flags |= POOL_MANAGE_WORKERS;
+			start_worker(pool->first_idle);
+			pool->first_idle = NULL;
+		}
 		break;
 	}
 
 	spin_unlock_irqrestore(&gcwq->lock, flags);
 
 	return notifier_from_errno(0);
+
+err_destroy:
+	if (new_trustee)
+		kthread_stop(new_trustee);
+
+	spin_lock_irqsave(&gcwq->lock, flags);
+	for (i = 0; i < NR_WORKER_POOLS; i++)
+		if (new_workers[i])
+			destroy_worker(new_workers[i]);
+	spin_unlock_irqrestore(&gcwq->lock, flags);
+
+	return NOTIFY_BAD;
 }
 
 #ifdef CONFIG_SMP
@@ -3774,6 +3848,7 @@ void thaw_workqueues(void)
 
 	for_each_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		struct worker_pool *pool;
 		struct workqueue_struct *wq;
 
 		spin_lock_irq(&gcwq->lock);
@@ -3795,7 +3870,8 @@ void thaw_workqueues(void)
 				cwq_activate_first_delayed(cwq);
 		}
 
-		wake_up_worker(&gcwq->pool);
+		for_each_worker_pool(pool, gcwq)
+			wake_up_worker(pool);
 
 		spin_unlock_irq(&gcwq->lock);
 	}
@@ -3816,25 +3892,29 @@ static int __init init_workqueues(void)
 	/* initialize gcwqs */
 	for_each_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		struct worker_pool *pool;
 
 		spin_lock_init(&gcwq->lock);
-		gcwq->pool.gcwq = gcwq;
-		INIT_LIST_HEAD(&gcwq->pool.worklist);
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		INIT_LIST_HEAD(&gcwq->pool.idle_list);
 		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
 			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
 
-		init_timer_deferrable(&gcwq->pool.idle_timer);
-		gcwq->pool.idle_timer.function = idle_worker_timeout;
-		gcwq->pool.idle_timer.data = (unsigned long)&gcwq->pool;
+		for_each_worker_pool(pool, gcwq) {
+			pool->gcwq = gcwq;
+			INIT_LIST_HEAD(&pool->worklist);
+			INIT_LIST_HEAD(&pool->idle_list);
+
+			init_timer_deferrable(&pool->idle_timer);
+			pool->idle_timer.function = idle_worker_timeout;
+			pool->idle_timer.data = (unsigned long)pool;
 
-		setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
-			    (unsigned long)&gcwq->pool);
+			setup_timer(&pool->mayday_timer, gcwq_mayday_timeout,
+				    (unsigned long)pool);
 
-		ida_init(&gcwq->pool.worker_ida);
+			ida_init(&pool->worker_ida);
+		}
 
 		gcwq->trustee_state = TRUSTEE_DONE;
 		init_waitqueue_head(&gcwq->trustee_wait);
@@ -3843,15 +3923,20 @@ static int __init init_workqueues(void)
 	/* create the initial worker */
 	for_each_online_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
-		struct worker *worker;
+		struct worker_pool *pool;
 
 		if (cpu != WORK_CPU_UNBOUND)
 			gcwq->flags &= ~GCWQ_DISASSOCIATED;
-		worker = create_worker(&gcwq->pool, true);
-		BUG_ON(!worker);
-		spin_lock_irq(&gcwq->lock);
-		start_worker(worker);
-		spin_unlock_irq(&gcwq->lock);
+
+		for_each_worker_pool(pool, gcwq) {
+			struct worker *worker;
+
+			worker = create_worker(pool, true);
+			BUG_ON(!worker);
+			spin_lock_irq(&gcwq->lock);
+			start_worker(worker);
+			spin_unlock_irq(&gcwq->lock);
+		}
 	}
 
 	system_wq = alloc_workqueue("events", 0, 0);
-- 
1.7.7.3



* [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()
@ 2012-07-09 18:41   ` Tejun Heo
  0 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-09 18:41 UTC (permalink / raw)
  To: linux-kernel
  Cc: axboe, elder, rni, martin.petersen, linux-bluetooth, torvalds,
	marcel, vwadekar, swhiteho, herbert, bpm, linux-crypto, gustavo,
	Tejun Heo, xfs, joshhunt00, davem, vgoyal, johan.hedberg

Introduce NR_WORKER_POOLS and for_each_worker_pool() and convert code
paths which need to manipulate all pools in a gcwq to use them.
NR_WORKER_POOLS is currently one and for_each_worker_pool() iterates
over only @gcwq->pool.

Note that nr_running is per-pool property and converted to an array
with NR_WORKER_POOLS elements and renamed to pool_nr_running.

The changes in this patch are mechanical and don't caues any
functional difference.  This is to prepare for multiple pools per
gcwq.

Signed-off-by: Tejun Heo <tj@kernel.org>
---
 kernel/workqueue.c |  225 ++++++++++++++++++++++++++++++++++++----------------
 1 files changed, 155 insertions(+), 70 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index e700dcc..9cbf3bc 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -74,6 +74,8 @@ enum {
 	TRUSTEE_RELEASE		= 3,		/* release workers */
 	TRUSTEE_DONE		= 4,		/* trustee is done */
 
+	NR_WORKER_POOLS		= 1,		/* # worker pools per gcwq */
+
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
 	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
 	BUSY_WORKER_HASH_MASK	= BUSY_WORKER_HASH_SIZE - 1,
@@ -274,6 +276,9 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 #define CREATE_TRACE_POINTS
 #include <trace/events/workqueue.h>
 
+#define for_each_worker_pool(pool, gcwq)				\
+	for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
 	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
 		hlist_for_each_entry(worker, pos, &gcwq->busy_hash[i], hentry)
@@ -454,7 +459,7 @@ static bool workqueue_freezing;		/* W: have wqs started freezing? */
  * try_to_wake_up().  Put it in a separate cacheline.
  */
 static DEFINE_PER_CPU(struct global_cwq, global_cwq);
-static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
+static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, pool_nr_running[NR_WORKER_POOLS]);
 
 /*
  * Global cpu workqueue and nr_running counter for unbound gcwq.  The
@@ -462,7 +467,9 @@ static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
  * workers have WORKER_UNBOUND set.
  */
 static struct global_cwq unbound_global_cwq;
-static atomic_t unbound_gcwq_nr_running = ATOMIC_INIT(0);	/* always 0 */
+static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {
+	[0 ... NR_WORKER_POOLS - 1]	= ATOMIC_INIT(0),	/* always 0 */
+};
 
 static int worker_thread(void *__worker);
 
@@ -477,11 +484,14 @@ static struct global_cwq *get_gcwq(unsigned int cpu)
 static atomic_t *get_pool_nr_running(struct worker_pool *pool)
 {
 	int cpu = pool->gcwq->cpu;
+	atomic_t (*nr_running)[NR_WORKER_POOLS];
 
 	if (cpu != WORK_CPU_UNBOUND)
-		return &per_cpu(gcwq_nr_running, cpu);
+		nr_running = &per_cpu(pool_nr_running, cpu);
 	else
-		return &unbound_gcwq_nr_running;
+		nr_running = &unbound_pool_nr_running;
+
+	return nr_running[0];
 }
 
 static struct cpu_workqueue_struct *get_cwq(unsigned int cpu,
@@ -3345,9 +3355,30 @@ EXPORT_SYMBOL_GPL(work_busy);
 	__ret1 < 0 ? -1 : 0;						\
 })
 
+static bool gcwq_is_managing_workers(struct global_cwq *gcwq)
+{
+	struct worker_pool *pool;
+
+	for_each_worker_pool(pool, gcwq)
+		if (pool->flags & POOL_MANAGING_WORKERS)
+			return true;
+	return false;
+}
+
+static bool gcwq_has_idle_workers(struct global_cwq *gcwq)
+{
+	struct worker_pool *pool;
+
+	for_each_worker_pool(pool, gcwq)
+		if (!list_empty(&pool->idle_list))
+			return true;
+	return false;
+}
+
 static int __cpuinit trustee_thread(void *__gcwq)
 {
 	struct global_cwq *gcwq = __gcwq;
+	struct worker_pool *pool;
 	struct worker *worker;
 	struct work_struct *work;
 	struct hlist_node *pos;
@@ -3363,13 +3394,15 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * cancelled.
 	 */
 	BUG_ON(gcwq->cpu != smp_processor_id());
-	rc = trustee_wait_event(!(gcwq->pool.flags & POOL_MANAGING_WORKERS));
+	rc = trustee_wait_event(!gcwq_is_managing_workers(gcwq));
 	BUG_ON(rc < 0);
 
-	gcwq->pool.flags |= POOL_MANAGING_WORKERS;
+	for_each_worker_pool(pool, gcwq) {
+		pool->flags |= POOL_MANAGING_WORKERS;
 
-	list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
-		worker->flags |= WORKER_ROGUE;
+		list_for_each_entry(worker, &pool->idle_list, entry)
+			worker->flags |= WORKER_ROGUE;
+	}
 
 	for_each_busy_worker(worker, i, pos, gcwq)
 		worker->flags |= WORKER_ROGUE;
@@ -3390,10 +3423,12 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * keep_working() are always true as long as the worklist is
 	 * not empty.
 	 */
-	atomic_set(get_pool_nr_running(&gcwq->pool), 0);
+	for_each_worker_pool(pool, gcwq)
+		atomic_set(get_pool_nr_running(pool), 0);
 
 	spin_unlock_irq(&gcwq->lock);
-	del_timer_sync(&gcwq->pool.idle_timer);
+	for_each_worker_pool(pool, gcwq)
+		del_timer_sync(&pool->idle_timer);
 	spin_lock_irq(&gcwq->lock);
 
 	/*
@@ -3415,29 +3450,38 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * may be frozen works in freezable cwqs.  Don't declare
 	 * completion while frozen.
 	 */
-	while (gcwq->pool.nr_workers != gcwq->pool.nr_idle ||
-	       gcwq->flags & GCWQ_FREEZING ||
-	       gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
-		int nr_works = 0;
+	while (true) {
+		bool busy = false;
 
-		list_for_each_entry(work, &gcwq->pool.worklist, entry) {
-			send_mayday(work);
-			nr_works++;
-		}
+		for_each_worker_pool(pool, gcwq)
+			busy |= pool->nr_workers != pool->nr_idle;
 
-		list_for_each_entry(worker, &gcwq->pool.idle_list, entry) {
-			if (!nr_works--)
-				break;
-			wake_up_process(worker->task);
-		}
+		if (!busy && !(gcwq->flags & GCWQ_FREEZING) &&
+		    gcwq->trustee_state != TRUSTEE_IN_CHARGE)
+			break;
 
-		if (need_to_create_worker(&gcwq->pool)) {
-			spin_unlock_irq(&gcwq->lock);
-			worker = create_worker(&gcwq->pool, false);
-			spin_lock_irq(&gcwq->lock);
-			if (worker) {
-				worker->flags |= WORKER_ROGUE;
-				start_worker(worker);
+		for_each_worker_pool(pool, gcwq) {
+			int nr_works = 0;
+
+			list_for_each_entry(work, &pool->worklist, entry) {
+				send_mayday(work);
+				nr_works++;
+			}
+
+			list_for_each_entry(worker, &pool->idle_list, entry) {
+				if (!nr_works--)
+					break;
+				wake_up_process(worker->task);
+			}
+
+			if (need_to_create_worker(pool)) {
+				spin_unlock_irq(&gcwq->lock);
+				worker = create_worker(pool, false);
+				spin_lock_irq(&gcwq->lock);
+				if (worker) {
+					worker->flags |= WORKER_ROGUE;
+					start_worker(worker);
+				}
 			}
 		}
 
@@ -3452,11 +3496,18 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * all workers till we're canceled.
 	 */
 	do {
-		rc = trustee_wait_event(!list_empty(&gcwq->pool.idle_list));
-		while (!list_empty(&gcwq->pool.idle_list))
-			destroy_worker(list_first_entry(&gcwq->pool.idle_list,
-							struct worker, entry));
-	} while (gcwq->pool.nr_workers && rc >= 0);
+		rc = trustee_wait_event(gcwq_has_idle_workers(gcwq));
+
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			while (!list_empty(&pool->idle_list)) {
+				worker = list_first_entry(&pool->idle_list,
+							  struct worker, entry);
+				destroy_worker(worker);
+			}
+			i |= pool->nr_workers;
+		}
+	} while (i && rc >= 0);
 
 	/*
 	 * At this point, either draining has completed and no worker
@@ -3465,7 +3516,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * Tell the remaining busy ones to rebind once it finishes the
 	 * currently scheduled works by scheduling the rebind_work.
 	 */
-	WARN_ON(!list_empty(&gcwq->pool.idle_list));
+	for_each_worker_pool(pool, gcwq)
+		WARN_ON(!list_empty(&pool->idle_list));
 
 	for_each_busy_worker(worker, i, pos, gcwq) {
 		struct work_struct *rebind_work = &worker->rebind_work;
@@ -3490,7 +3542,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	}
 
 	/* relinquish manager role */
-	gcwq->pool.flags &= ~POOL_MANAGING_WORKERS;
+	for_each_worker_pool(pool, gcwq)
+		pool->flags &= ~POOL_MANAGING_WORKERS;
 
 	/* notify completion */
 	gcwq->trustee = NULL;
@@ -3532,8 +3585,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 	unsigned int cpu = (unsigned long)hcpu;
 	struct global_cwq *gcwq = get_gcwq(cpu);
 	struct task_struct *new_trustee = NULL;
-	struct worker *uninitialized_var(new_worker);
+	struct worker *new_workers[NR_WORKER_POOLS] = { };
+	struct worker_pool *pool;
 	unsigned long flags;
+	int i;
 
 	action &= ~CPU_TASKS_FROZEN;
 
@@ -3546,12 +3601,12 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		kthread_bind(new_trustee, cpu);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->pool.first_idle);
-		new_worker = create_worker(&gcwq->pool, false);
-		if (!new_worker) {
-			if (new_trustee)
-				kthread_stop(new_trustee);
-			return NOTIFY_BAD;
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			BUG_ON(pool->first_idle);
+			new_workers[i] = create_worker(pool, false);
+			if (!new_workers[i++])
+				goto err_destroy;
 		}
 	}
 
@@ -3568,8 +3623,11 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = new_worker;
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			BUG_ON(pool->first_idle);
+			pool->first_idle = new_workers[i++];
+		}
 		break;
 
 	case CPU_DYING:
@@ -3586,8 +3644,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		gcwq->trustee_state = TRUSTEE_BUTCHER;
 		/* fall through */
 	case CPU_UP_CANCELED:
-		destroy_worker(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = NULL;
+		for_each_worker_pool(pool, gcwq) {
+			destroy_worker(pool->first_idle);
+			pool->first_idle = NULL;
+		}
 		break;
 
 	case CPU_DOWN_FAILED:
@@ -3604,18 +3664,32 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		 * Put the first_idle in and request a real manager to
 		 * take a look.
 		 */
-		spin_unlock_irq(&gcwq->lock);
-		kthread_bind(gcwq->pool.first_idle->task, cpu);
-		spin_lock_irq(&gcwq->lock);
-		gcwq->pool.flags |= POOL_MANAGE_WORKERS;
-		start_worker(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = NULL;
+		for_each_worker_pool(pool, gcwq) {
+			spin_unlock_irq(&gcwq->lock);
+			kthread_bind(pool->first_idle->task, cpu);
+			spin_lock_irq(&gcwq->lock);
+			pool->flags |= POOL_MANAGE_WORKERS;
+			start_worker(pool->first_idle);
+			pool->first_idle = NULL;
+		}
 		break;
 	}
 
 	spin_unlock_irqrestore(&gcwq->lock, flags);
 
 	return notifier_from_errno(0);
+
+err_destroy:
+	if (new_trustee)
+		kthread_stop(new_trustee);
+
+	spin_lock_irqsave(&gcwq->lock, flags);
+	for (i = 0; i < NR_WORKER_POOLS; i++)
+		if (new_workers[i])
+			destroy_worker(new_workers[i]);
+	spin_unlock_irqrestore(&gcwq->lock, flags);
+
+	return NOTIFY_BAD;
 }
 
 #ifdef CONFIG_SMP
@@ -3774,6 +3848,7 @@ void thaw_workqueues(void)
 
 	for_each_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		struct worker_pool *pool;
 		struct workqueue_struct *wq;
 
 		spin_lock_irq(&gcwq->lock);
@@ -3795,7 +3870,8 @@ void thaw_workqueues(void)
 				cwq_activate_first_delayed(cwq);
 		}
 
-		wake_up_worker(&gcwq->pool);
+		for_each_worker_pool(pool, gcwq)
+			wake_up_worker(pool);
 
 		spin_unlock_irq(&gcwq->lock);
 	}
@@ -3816,25 +3892,29 @@ static int __init init_workqueues(void)
 	/* initialize gcwqs */
 	for_each_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		struct worker_pool *pool;
 
 		spin_lock_init(&gcwq->lock);
-		gcwq->pool.gcwq = gcwq;
-		INIT_LIST_HEAD(&gcwq->pool.worklist);
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		INIT_LIST_HEAD(&gcwq->pool.idle_list);
 		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
 			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
 
-		init_timer_deferrable(&gcwq->pool.idle_timer);
-		gcwq->pool.idle_timer.function = idle_worker_timeout;
-		gcwq->pool.idle_timer.data = (unsigned long)&gcwq->pool;
+		for_each_worker_pool(pool, gcwq) {
+			pool->gcwq = gcwq;
+			INIT_LIST_HEAD(&pool->worklist);
+			INIT_LIST_HEAD(&pool->idle_list);
+
+			init_timer_deferrable(&pool->idle_timer);
+			pool->idle_timer.function = idle_worker_timeout;
+			pool->idle_timer.data = (unsigned long)pool;
 
-		setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
-			    (unsigned long)&gcwq->pool);
+			setup_timer(&pool->mayday_timer, gcwq_mayday_timeout,
+				    (unsigned long)pool);
 
-		ida_init(&gcwq->pool.worker_ida);
+			ida_init(&pool->worker_ida);
+		}
 
 		gcwq->trustee_state = TRUSTEE_DONE;
 		init_waitqueue_head(&gcwq->trustee_wait);
@@ -3843,15 +3923,20 @@ static int __init init_workqueues(void)
 	/* create the initial worker */
 	for_each_online_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
-		struct worker *worker;
+		struct worker_pool *pool;
 
 		if (cpu != WORK_CPU_UNBOUND)
 			gcwq->flags &= ~GCWQ_DISASSOCIATED;
-		worker = create_worker(&gcwq->pool, true);
-		BUG_ON(!worker);
-		spin_lock_irq(&gcwq->lock);
-		start_worker(worker);
-		spin_unlock_irq(&gcwq->lock);
+
+		for_each_worker_pool(pool, gcwq) {
+			struct worker *worker;
+
+			worker = create_worker(pool, true);
+			BUG_ON(!worker);
+			spin_lock_irq(&gcwq->lock);
+			start_worker(worker);
+			spin_unlock_irq(&gcwq->lock);
+		}
 	}
 
 	system_wq = alloc_workqueue("events", 0, 0);
-- 
1.7.7.3

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
  2012-07-09 18:41 ` Tejun Heo
  (?)
@ 2012-07-09 18:41   ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-09 18:41 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen, Tejun Heo

WQ_HIGHPRI was implemented by queueing highpri work items at the head
of the global worklist.  Other than queueing at the head, they weren't
handled differently; unfortunately, this could lead to execution
latency of a few seconds on heavily loaded systems.

Now that workqueue code has been updated to deal with multiple
worker_pools per global_cwq, this patch reimplements WQ_HIGHPRI using
a separate worker_pool.  NR_WORKER_POOLS is bumped to two and
gcwq->pools[0] is used for normal pri work items and ->pools[1] for
highpri.  Highpri workers get a -20 nice level and have an 'H' suffix
in their names.  Note that this change increases the number of kworkers
per cpu.

POOL_HIGHPRI_PENDING, pool_determine_ins_pos() and the highpri chain
wakeup code in process_one_work() are no longer needed and are removed.

This allows proper prioritization of highpri work items and removes
their high execution latency.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Josh Hunt <joshhunt00@gmail.com>
LKML-Reference: <CAKA=qzaHqwZ8eqpLNFjxnO2fX-tgAOjmpvxgBFjv6dJeQaOW1w@mail.gmail.com>
---
 Documentation/workqueue.txt |  103 ++++++++++++++++---------------------------
 kernel/workqueue.c          |  100 +++++++++++------------------------------
 2 files changed, 65 insertions(+), 138 deletions(-)

diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt
index a0b577d..a6ab4b6 100644
--- a/Documentation/workqueue.txt
+++ b/Documentation/workqueue.txt
@@ -89,25 +89,28 @@ called thread-pools.
 
 The cmwq design differentiates between the user-facing workqueues that
 subsystems and drivers queue work items on and the backend mechanism
-which manages thread-pool and processes the queued work items.
+which manages thread-pools and processes the queued work items.
 
 The backend is called gcwq.  There is one gcwq for each possible CPU
-and one gcwq to serve work items queued on unbound workqueues.
+and one gcwq to serve work items queued on unbound workqueues.  Each
+gcwq has two thread-pools - one for normal work items and the other
+for high priority ones.
 
 Subsystems and drivers can create and queue work items through special
 workqueue API functions as they see fit. They can influence some
 aspects of the way the work items are executed by setting flags on the
 workqueue they are putting the work item on. These flags include
-things like CPU locality, reentrancy, concurrency limits and more. To
-get a detailed overview refer to the API description of
+things like CPU locality, reentrancy, concurrency limits, priority and
+more.  To get a detailed overview refer to the API description of
 alloc_workqueue() below.
 
-When a work item is queued to a workqueue, the target gcwq is
-determined according to the queue parameters and workqueue attributes
-and appended on the shared worklist of the gcwq.  For example, unless
-specifically overridden, a work item of a bound workqueue will be
-queued on the worklist of exactly that gcwq that is associated to the
-CPU the issuer is running on.
+When a work item is queued to a workqueue, the target gcwq and
+thread-pool is determined according to the queue parameters and
+workqueue attributes and appended on the shared worklist of the
+thread-pool.  For example, unless specifically overridden, a work item
+of a bound workqueue will be queued on the worklist of either normal
+or highpri thread-pool of the gcwq that is associated to the CPU the
+issuer is running on.
 
 For any worker pool implementation, managing the concurrency level
 (how many execution contexts are active) is an important issue.  cmwq
@@ -115,26 +118,26 @@ tries to keep the concurrency at a minimal but sufficient level.
 Minimal to save resources and sufficient in that the system is used at
 its full capacity.
 
-Each gcwq bound to an actual CPU implements concurrency management by
-hooking into the scheduler.  The gcwq is notified whenever an active
-worker wakes up or sleeps and keeps track of the number of the
-currently runnable workers.  Generally, work items are not expected to
-hog a CPU and consume many cycles.  That means maintaining just enough
-concurrency to prevent work processing from stalling should be
-optimal.  As long as there are one or more runnable workers on the
-CPU, the gcwq doesn't start execution of a new work, but, when the
-last running worker goes to sleep, it immediately schedules a new
-worker so that the CPU doesn't sit idle while there are pending work
-items.  This allows using a minimal number of workers without losing
-execution bandwidth.
+Each thread-pool bound to an actual CPU implements concurrency
+management by hooking into the scheduler.  The thread-pool is notified
+whenever an active worker wakes up or sleeps and keeps track of the
+number of the currently runnable workers.  Generally, work items are
+not expected to hog a CPU and consume many cycles.  That means
+maintaining just enough concurrency to prevent work processing from
+stalling should be optimal.  As long as there are one or more runnable
+workers on the CPU, the thread-pool doesn't start execution of a new
+work, but, when the last running worker goes to sleep, it immediately
+schedules a new worker so that the CPU doesn't sit idle while there
+are pending work items.  This allows using a minimal number of workers
+without losing execution bandwidth.
 
 Keeping idle workers around doesn't cost other than the memory space
 for kthreads, so cmwq holds onto idle ones for a while before killing
 them.
 
 For an unbound wq, the above concurrency management doesn't apply and
-the gcwq for the pseudo unbound CPU tries to start executing all work
-items as soon as possible.  The responsibility of regulating
+the thread-pools for the pseudo unbound CPU try to start executing all
+work items as soon as possible.  The responsibility of regulating
 concurrency level is on the users.  There is also a flag to mark a
 bound wq to ignore the concurrency management.  Please refer to the
 API section for details.
@@ -205,31 +208,22 @@ resources, scheduled and executed.
 
   WQ_HIGHPRI
 
-	Work items of a highpri wq are queued at the head of the
-	worklist of the target gcwq and start execution regardless of
-	the current concurrency level.  In other words, highpri work
-	items will always start execution as soon as execution
-	resource is available.
+	Work items of a highpri wq are queued to the highpri
+	thread-pool of the target gcwq.  Highpri thread-pools are
+	served by worker threads with elevated nice level.
 
-	Ordering among highpri work items is preserved - a highpri
-	work item queued after another highpri work item will start
-	execution after the earlier highpri work item starts.
-
-	Although highpri work items are not held back by other
-	runnable work items, they still contribute to the concurrency
-	level.  Highpri work items in runnable state will prevent
-	non-highpri work items from starting execution.
-
-	This flag is meaningless for unbound wq.
+	Note that normal and highpri thread-pools don't interact with
+	each other.  Each maintain its separate pool of workers and
+	implements concurrency management among its workers.
 
   WQ_CPU_INTENSIVE
 
 	Work items of a CPU intensive wq do not contribute to the
 	concurrency level.  In other words, runnable CPU intensive
-	work items will not prevent other work items from starting
-	execution.  This is useful for bound work items which are
-	expected to hog CPU cycles so that their execution is
-	regulated by the system scheduler.
+	work items will not prevent other work items in the same
+	thread-pool from starting execution.  This is useful for bound
+	work items which are expected to hog CPU cycles so that their
+	execution is regulated by the system scheduler.
 
 	Although CPU intensive work items don't contribute to the
 	concurrency level, start of their executions is still
@@ -239,14 +233,6 @@ resources, scheduled and executed.
 
 	This flag is meaningless for unbound wq.
 
-  WQ_HIGHPRI | WQ_CPU_INTENSIVE
-
-	This combination makes the wq avoid interaction with
-	concurrency management completely and behave as a simple
-	per-CPU execution context provider.  Work items queued on a
-	highpri CPU-intensive wq start execution as soon as resources
-	are available and don't affect execution of other work items.
-
 @max_active:
 
 @max_active determines the maximum number of execution contexts per
@@ -328,20 +314,7 @@ If @max_active == 2,
  35		w2 wakes up and finishes
 
 Now, let's assume w1 and w2 are queued to a different wq q1 which has
-WQ_HIGHPRI set,
-
- TIME IN MSECS	EVENT
- 0		w1 and w2 start and burn CPU
- 5		w1 sleeps
- 10		w2 sleeps
- 10		w0 starts and burns CPU
- 15		w0 sleeps
- 15		w1 wakes up and finishes
- 20		w2 wakes up and finishes
- 25		w0 wakes up and burns CPU
- 30		w0 finishes
-
-If q1 has WQ_CPU_INTENSIVE set,
+WQ_CPU_INTENSIVE set,
 
  TIME IN MSECS	EVENT
  0		w0 starts and burns CPU
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 9cbf3bc..e7f26cb 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -52,7 +52,6 @@ enum {
 	/* pool flags */
 	POOL_MANAGE_WORKERS	= 1 << 0,	/* need to manage workers */
 	POOL_MANAGING_WORKERS	= 1 << 1,	/* managing workers */
-	POOL_HIGHPRI_PENDING	= 1 << 2,	/* highpri works on queue */
 
 	/* worker flags */
 	WORKER_STARTED		= 1 << 0,	/* started */
@@ -74,7 +73,7 @@ enum {
 	TRUSTEE_RELEASE		= 3,		/* release workers */
 	TRUSTEE_DONE		= 4,		/* trustee is done */
 
-	NR_WORKER_POOLS		= 1,		/* # worker pools per gcwq */
+	NR_WORKER_POOLS		= 2,		/* # worker pools per gcwq */
 
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
 	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
@@ -95,6 +94,7 @@ enum {
 	 * all cpus.  Give -20.
 	 */
 	RESCUER_NICE_LEVEL	= -20,
+	HIGHPRI_NICE_LEVEL	= -20,
 };
 
 /*
@@ -174,7 +174,7 @@ struct global_cwq {
 	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
 						/* L: hash of busy workers */
 
-	struct worker_pool	pool;		/* the worker pools */
+	struct worker_pool	pools[2];	/* normal and highpri pools */
 
 	struct task_struct	*trustee;	/* L: for gcwq shutdown */
 	unsigned int		trustee_state;	/* L: trustee state */
@@ -277,7 +277,8 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 #include <trace/events/workqueue.h>
 
 #define for_each_worker_pool(pool, gcwq)				\
-	for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+	for ((pool) = &(gcwq)->pools[0];				\
+	     (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)
 
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
 	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
@@ -473,6 +474,11 @@ static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {
 
 static int worker_thread(void *__worker);
 
+static int worker_pool_pri(struct worker_pool *pool)
+{
+	return pool - pool->gcwq->pools;
+}
+
 static struct global_cwq *get_gcwq(unsigned int cpu)
 {
 	if (cpu != WORK_CPU_UNBOUND)
@@ -491,7 +497,7 @@ static atomic_t *get_pool_nr_running(struct worker_pool *pool)
 	else
 		nr_running = &unbound_pool_nr_running;
 
-	return nr_running[0];
+	return nr_running[worker_pool_pri(pool)];
 }
 
 static struct cpu_workqueue_struct *get_cwq(unsigned int cpu,
@@ -588,15 +594,14 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
 }
 
 /*
- * Policy functions.  These define the policies on how the global
- * worker pool is managed.  Unless noted otherwise, these functions
- * assume that they're being called with gcwq->lock held.
+ * Policy functions.  These define the policies on how the global worker
+ * pools are managed.  Unless noted otherwise, these functions assume that
+ * they're being called with gcwq->lock held.
  */
 
 static bool __need_more_worker(struct worker_pool *pool)
 {
-	return !atomic_read(get_pool_nr_running(pool)) ||
-		(pool->flags & POOL_HIGHPRI_PENDING);
+	return !atomic_read(get_pool_nr_running(pool));
 }
 
 /*
@@ -623,9 +628,7 @@ static bool keep_working(struct worker_pool *pool)
 {
 	atomic_t *nr_running = get_pool_nr_running(pool);
 
-	return !list_empty(&pool->worklist) &&
-		(atomic_read(nr_running) <= 1 ||
-		 (pool->flags & POOL_HIGHPRI_PENDING));
+	return !list_empty(&pool->worklist) && atomic_read(nr_running) <= 1;
 }
 
 /* Do we need a new worker?  Called from manager. */
@@ -894,43 +897,6 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
 }
 
 /**
- * pool_determine_ins_pos - find insertion position
- * @pool: pool of interest
- * @cwq: cwq a work is being queued for
- *
- * A work for @cwq is about to be queued on @pool, determine insertion
- * position for the work.  If @cwq is for HIGHPRI wq, the work is
- * queued at the head of the queue but in FIFO order with respect to
- * other HIGHPRI works; otherwise, at the end of the queue.  This
- * function also sets POOL_HIGHPRI_PENDING flag to hint @pool that
- * there are HIGHPRI works pending.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to inserstion position.
- */
-static inline struct list_head *pool_determine_ins_pos(struct worker_pool *pool,
-					       struct cpu_workqueue_struct *cwq)
-{
-	struct work_struct *twork;
-
-	if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
-		return &pool->worklist;
-
-	list_for_each_entry(twork, &pool->worklist, entry) {
-		struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
-
-		if (!(tcwq->wq->flags & WQ_HIGHPRI))
-			break;
-	}
-
-	pool->flags |= POOL_HIGHPRI_PENDING;
-	return &twork->entry;
-}
-
-/**
  * insert_work - insert a work into gcwq
  * @cwq: cwq @work belongs to
  * @work: work to insert
@@ -1070,7 +1036,7 @@ static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
 	if (likely(cwq->nr_active < cwq->max_active)) {
 		trace_workqueue_activate_work(work);
 		cwq->nr_active++;
-		worklist = pool_determine_ins_pos(cwq->pool, cwq);
+		worklist = &cwq->pool->worklist;
 	} else {
 		work_flags |= WORK_STRUCT_DELAYED;
 		worklist = &cwq->delayed_works;
@@ -1387,6 +1353,7 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
 {
 	struct global_cwq *gcwq = pool->gcwq;
 	bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+	const char *pri = worker_pool_pri(pool) ? "H" : "";
 	struct worker *worker = NULL;
 	int id = -1;
 
@@ -1408,15 +1375,17 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
 
 	if (!on_unbound_cpu)
 		worker->task = kthread_create_on_node(worker_thread,
-						      worker,
-						      cpu_to_node(gcwq->cpu),
-						      "kworker/%u:%d", gcwq->cpu, id);
+					worker, cpu_to_node(gcwq->cpu),
+					"kworker/%u:%d%s", gcwq->cpu, id, pri);
 	else
 		worker->task = kthread_create(worker_thread, worker,
-					      "kworker/u:%d", id);
+					      "kworker/u:%d%s", id, pri);
 	if (IS_ERR(worker->task))
 		goto fail;
 
+	if (worker_pool_pri(pool))
+		set_user_nice(worker->task, HIGHPRI_NICE_LEVEL);
+
 	/*
 	 * A rogue worker will become a regular one if CPU comes
 	 * online later on.  Make sure every worker has
@@ -1763,10 +1732,9 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *work = list_first_entry(&cwq->delayed_works,
 						    struct work_struct, entry);
-	struct list_head *pos = pool_determine_ins_pos(cwq->pool, cwq);
 
 	trace_workqueue_activate_work(work);
-	move_linked_works(work, pos, NULL);
+	move_linked_works(work, &cwq->pool->worklist, NULL);
 	__clear_bit(WORK_STRUCT_DELAYED_BIT, work_data_bits(work));
 	cwq->nr_active++;
 }
@@ -1882,21 +1850,6 @@ __acquires(&gcwq->lock)
 	list_del_init(&work->entry);
 
 	/*
-	 * If HIGHPRI_PENDING, check the next work, and, if HIGHPRI,
-	 * wake up another worker; otherwise, clear HIGHPRI_PENDING.
-	 */
-	if (unlikely(pool->flags & POOL_HIGHPRI_PENDING)) {
-		struct work_struct *nwork = list_first_entry(&pool->worklist,
-					 struct work_struct, entry);
-
-		if (!list_empty(&pool->worklist) &&
-		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
-			wake_up_worker(pool);
-		else
-			pool->flags &= ~POOL_HIGHPRI_PENDING;
-	}
-
-	/*
 	 * CPU intensive works don't participate in concurrency
 	 * management.  They're the scheduler's responsibility.
 	 */
@@ -3049,9 +3002,10 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		int pool_idx = (bool)(flags & WQ_HIGHPRI);
 
 		BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
-		cwq->pool = &gcwq->pool;
+		cwq->pool = &gcwq->pools[pool_idx];
 		cwq->wq = wq;
 		cwq->flush_color = -1;
 		cwq->max_active = max_active;
-- 
1.7.7.3

-		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
-			wake_up_worker(pool);
-		else
-			pool->flags &= ~POOL_HIGHPRI_PENDING;
-	}
-
-	/*
 	 * CPU intensive works don't participate in concurrency
 	 * management.  They're the scheduler's responsibility.
 	 */
@@ -3049,9 +3002,10 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		int pool_idx = (bool)(flags & WQ_HIGHPRI);
 
 		BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
-		cwq->pool = &gcwq->pool;
+		cwq->pool = &gcwq->pools[pool_idx];
 		cwq->wq = wq;
 		cwq->flush_color = -1;
 		cwq->max_active = max_active;
-- 
1.7.7.3


^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
@ 2012-07-09 18:41   ` Tejun Heo
  0 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-09 18:41 UTC (permalink / raw)
  To: linux-kernel
  Cc: axboe, elder, rni, martin.petersen, linux-bluetooth, torvalds,
	marcel, vwadekar, swhiteho, herbert, bpm, linux-crypto, gustavo,
	Tejun Heo, xfs, joshhunt00, davem, vgoyal, johan.hedberg

WQ_HIGHPRI was implemented by queueing highpri work items at the head
of the global worklist.  Other than queueing at the head, they weren't
handled differently; unfortunately, this could lead to execution
latency of a few seconds on heavily loaded systems.

Now that workqueue code has been updated to deal with multiple
worker_pools per global_cwq, this patch reimplements WQ_HIGHPRI using
a separate worker_pool.  NR_WORKER_POOLS is bumped to two and
gcwq->pools[0] is used for normal pri work items and ->pools[1] for
highpri.  Highpri workers get a -20 nice level and have an 'H' suffix in
their names.  Note that this change increases the number of kworkers
per cpu.

POOL_HIGHPRI_PENDING, pool_determine_ins_pos() and highpri chain
wakeup code in process_one_work() are no longer used and removed.

This allows proper prioritization of highpri work items and removes
high execution latency of highpri work items.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Josh Hunt <joshhunt00@gmail.com>
LKML-Reference: <CAKA=qzaHqwZ8eqpLNFjxnO2fX-tgAOjmpvxgBFjv6dJeQaOW1w@mail.gmail.com>
---
 Documentation/workqueue.txt |  103 ++++++++++++++++---------------------------
 kernel/workqueue.c          |  100 +++++++++++------------------------------
 2 files changed, 65 insertions(+), 138 deletions(-)

diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt
index a0b577d..a6ab4b6 100644
--- a/Documentation/workqueue.txt
+++ b/Documentation/workqueue.txt
@@ -89,25 +89,28 @@ called thread-pools.
 
 The cmwq design differentiates between the user-facing workqueues that
 subsystems and drivers queue work items on and the backend mechanism
-which manages thread-pool and processes the queued work items.
+which manages thread-pools and processes the queued work items.
 
 The backend is called gcwq.  There is one gcwq for each possible CPU
-and one gcwq to serve work items queued on unbound workqueues.
+and one gcwq to serve work items queued on unbound workqueues.  Each
+gcwq has two thread-pools - one for normal work items and the other
+for high priority ones.
 
 Subsystems and drivers can create and queue work items through special
 workqueue API functions as they see fit. They can influence some
 aspects of the way the work items are executed by setting flags on the
 workqueue they are putting the work item on. These flags include
-things like CPU locality, reentrancy, concurrency limits and more. To
-get a detailed overview refer to the API description of
+things like CPU locality, reentrancy, concurrency limits, priority and
+more.  To get a detailed overview refer to the API description of
 alloc_workqueue() below.
 
-When a work item is queued to a workqueue, the target gcwq is
-determined according to the queue parameters and workqueue attributes
-and appended on the shared worklist of the gcwq.  For example, unless
-specifically overridden, a work item of a bound workqueue will be
-queued on the worklist of exactly that gcwq that is associated to the
-CPU the issuer is running on.
+When a work item is queued to a workqueue, the target gcwq and
+thread-pool are determined according to the queue parameters and
+workqueue attributes, and the work item is appended on the shared
+worklist of the thread-pool.  For example, unless specifically
+overridden, a work item of a bound workqueue will be queued on the
+worklist of either the normal or the highpri thread-pool of the gcwq
+associated to the CPU the issuer is running on.
 
 For any worker pool implementation, managing the concurrency level
 (how many execution contexts are active) is an important issue.  cmwq
@@ -115,26 +118,26 @@ tries to keep the concurrency at a minimal but sufficient level.
 Minimal to save resources and sufficient in that the system is used at
 its full capacity.
 
-Each gcwq bound to an actual CPU implements concurrency management by
-hooking into the scheduler.  The gcwq is notified whenever an active
-worker wakes up or sleeps and keeps track of the number of the
-currently runnable workers.  Generally, work items are not expected to
-hog a CPU and consume many cycles.  That means maintaining just enough
-concurrency to prevent work processing from stalling should be
-optimal.  As long as there are one or more runnable workers on the
-CPU, the gcwq doesn't start execution of a new work, but, when the
-last running worker goes to sleep, it immediately schedules a new
-worker so that the CPU doesn't sit idle while there are pending work
-items.  This allows using a minimal number of workers without losing
-execution bandwidth.
+Each thread-pool bound to an actual CPU implements concurrency
+management by hooking into the scheduler.  The thread-pool is notified
+whenever an active worker wakes up or sleeps and keeps track of the
+number of the currently runnable workers.  Generally, work items are
+not expected to hog a CPU and consume many cycles.  That means
+maintaining just enough concurrency to prevent work processing from
+stalling should be optimal.  As long as there are one or more runnable
+workers on the CPU, the thread-pool doesn't start execution of a new
+work, but, when the last running worker goes to sleep, it immediately
+schedules a new worker so that the CPU doesn't sit idle while there
+are pending work items.  This allows using a minimal number of workers
+without losing execution bandwidth.
 
 Keeping idle workers around doesn't cost other than the memory space
 for kthreads, so cmwq holds onto idle ones for a while before killing
 them.
 
 For an unbound wq, the above concurrency management doesn't apply and
-the gcwq for the pseudo unbound CPU tries to start executing all work
-items as soon as possible.  The responsibility of regulating
+the thread-pools for the pseudo unbound CPU try to start executing all
+work items as soon as possible.  The responsibility of regulating
 concurrency level is on the users.  There is also a flag to mark a
 bound wq to ignore the concurrency management.  Please refer to the
 API section for details.
@@ -205,31 +208,22 @@ resources, scheduled and executed.
 
   WQ_HIGHPRI
 
-	Work items of a highpri wq are queued at the head of the
-	worklist of the target gcwq and start execution regardless of
-	the current concurrency level.  In other words, highpri work
-	items will always start execution as soon as execution
-	resource is available.
+	Work items of a highpri wq are queued to the highpri
+	thread-pool of the target gcwq.  Highpri thread-pools are
+	served by worker threads with elevated nice level.
 
-	Ordering among highpri work items is preserved - a highpri
-	work item queued after another highpri work item will start
-	execution after the earlier highpri work item starts.
-
-	Although highpri work items are not held back by other
-	runnable work items, they still contribute to the concurrency
-	level.  Highpri work items in runnable state will prevent
-	non-highpri work items from starting execution.
-
-	This flag is meaningless for unbound wq.
+	Note that normal and highpri thread-pools don't interact with
+	each other.  Each maintains its separate pool of workers and
+	implements concurrency management among its workers.
 
   WQ_CPU_INTENSIVE
 
 	Work items of a CPU intensive wq do not contribute to the
 	concurrency level.  In other words, runnable CPU intensive
-	work items will not prevent other work items from starting
-	execution.  This is useful for bound work items which are
-	expected to hog CPU cycles so that their execution is
-	regulated by the system scheduler.
+	work items will not prevent other work items in the same
+	thread-pool from starting execution.  This is useful for bound
+	work items which are expected to hog CPU cycles so that their
+	execution is regulated by the system scheduler.
 
 	Although CPU intensive work items don't contribute to the
 	concurrency level, start of their executions is still
@@ -239,14 +233,6 @@ resources, scheduled and executed.
 
 	This flag is meaningless for unbound wq.
 
-  WQ_HIGHPRI | WQ_CPU_INTENSIVE
-
-	This combination makes the wq avoid interaction with
-	concurrency management completely and behave as a simple
-	per-CPU execution context provider.  Work items queued on a
-	highpri CPU-intensive wq start execution as soon as resources
-	are available and don't affect execution of other work items.
-
 @max_active:
 
 @max_active determines the maximum number of execution contexts per
@@ -328,20 +314,7 @@ If @max_active == 2,
  35		w2 wakes up and finishes
 
 Now, let's assume w1 and w2 are queued to a different wq q1 which has
-WQ_HIGHPRI set,
-
- TIME IN MSECS	EVENT
- 0		w1 and w2 start and burn CPU
- 5		w1 sleeps
- 10		w2 sleeps
- 10		w0 starts and burns CPU
- 15		w0 sleeps
- 15		w1 wakes up and finishes
- 20		w2 wakes up and finishes
- 25		w0 wakes up and burns CPU
- 30		w0 finishes
-
-If q1 has WQ_CPU_INTENSIVE set,
+WQ_CPU_INTENSIVE set,
 
  TIME IN MSECS	EVENT
  0		w0 starts and burns CPU
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 9cbf3bc..e7f26cb 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -52,7 +52,6 @@ enum {
 	/* pool flags */
 	POOL_MANAGE_WORKERS	= 1 << 0,	/* need to manage workers */
 	POOL_MANAGING_WORKERS	= 1 << 1,	/* managing workers */
-	POOL_HIGHPRI_PENDING	= 1 << 2,	/* highpri works on queue */
 
 	/* worker flags */
 	WORKER_STARTED		= 1 << 0,	/* started */
@@ -74,7 +73,7 @@ enum {
 	TRUSTEE_RELEASE		= 3,		/* release workers */
 	TRUSTEE_DONE		= 4,		/* trustee is done */
 
-	NR_WORKER_POOLS		= 1,		/* # worker pools per gcwq */
+	NR_WORKER_POOLS		= 2,		/* # worker pools per gcwq */
 
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
 	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
@@ -95,6 +94,7 @@ enum {
 	 * all cpus.  Give -20.
 	 */
 	RESCUER_NICE_LEVEL	= -20,
+	HIGHPRI_NICE_LEVEL	= -20,
 };
 
 /*
@@ -174,7 +174,7 @@ struct global_cwq {
 	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
 						/* L: hash of busy workers */
 
-	struct worker_pool	pool;		/* the worker pools */
+	struct worker_pool	pools[2];	/* normal and highpri pools */
 
 	struct task_struct	*trustee;	/* L: for gcwq shutdown */
 	unsigned int		trustee_state;	/* L: trustee state */
@@ -277,7 +277,8 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 #include <trace/events/workqueue.h>
 
 #define for_each_worker_pool(pool, gcwq)				\
-	for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+	for ((pool) = &(gcwq)->pools[0];				\
+	     (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)
 
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
 	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
@@ -473,6 +474,11 @@ static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {
 
 static int worker_thread(void *__worker);
 
+static int worker_pool_pri(struct worker_pool *pool)
+{
+	return pool - pool->gcwq->pools;
+}
+
 static struct global_cwq *get_gcwq(unsigned int cpu)
 {
 	if (cpu != WORK_CPU_UNBOUND)
@@ -491,7 +497,7 @@ static atomic_t *get_pool_nr_running(struct worker_pool *pool)
 	else
 		nr_running = &unbound_pool_nr_running;
 
-	return nr_running[0];
+	return nr_running[worker_pool_pri(pool)];
 }
 
 static struct cpu_workqueue_struct *get_cwq(unsigned int cpu,
@@ -588,15 +594,14 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
 }
 
 /*
- * Policy functions.  These define the policies on how the global
- * worker pool is managed.  Unless noted otherwise, these functions
- * assume that they're being called with gcwq->lock held.
+ * Policy functions.  These define the policies on how the global worker
+ * pools are managed.  Unless noted otherwise, these functions assume that
+ * they're being called with gcwq->lock held.
  */
 
 static bool __need_more_worker(struct worker_pool *pool)
 {
-	return !atomic_read(get_pool_nr_running(pool)) ||
-		(pool->flags & POOL_HIGHPRI_PENDING);
+	return !atomic_read(get_pool_nr_running(pool));
 }
 
 /*
@@ -623,9 +628,7 @@ static bool keep_working(struct worker_pool *pool)
 {
 	atomic_t *nr_running = get_pool_nr_running(pool);
 
-	return !list_empty(&pool->worklist) &&
-		(atomic_read(nr_running) <= 1 ||
-		 (pool->flags & POOL_HIGHPRI_PENDING));
+	return !list_empty(&pool->worklist) && atomic_read(nr_running) <= 1;
 }
 
 /* Do we need a new worker?  Called from manager. */
@@ -894,43 +897,6 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
 }
 
 /**
- * pool_determine_ins_pos - find insertion position
- * @pool: pool of interest
- * @cwq: cwq a work is being queued for
- *
- * A work for @cwq is about to be queued on @pool, determine insertion
- * position for the work.  If @cwq is for HIGHPRI wq, the work is
- * queued at the head of the queue but in FIFO order with respect to
- * other HIGHPRI works; otherwise, at the end of the queue.  This
- * function also sets POOL_HIGHPRI_PENDING flag to hint @pool that
- * there are HIGHPRI works pending.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to inserstion position.
- */
-static inline struct list_head *pool_determine_ins_pos(struct worker_pool *pool,
-					       struct cpu_workqueue_struct *cwq)
-{
-	struct work_struct *twork;
-
-	if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
-		return &pool->worklist;
-
-	list_for_each_entry(twork, &pool->worklist, entry) {
-		struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
-
-		if (!(tcwq->wq->flags & WQ_HIGHPRI))
-			break;
-	}
-
-	pool->flags |= POOL_HIGHPRI_PENDING;
-	return &twork->entry;
-}
-
-/**
  * insert_work - insert a work into gcwq
  * @cwq: cwq @work belongs to
  * @work: work to insert
@@ -1070,7 +1036,7 @@ static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
 	if (likely(cwq->nr_active < cwq->max_active)) {
 		trace_workqueue_activate_work(work);
 		cwq->nr_active++;
-		worklist = pool_determine_ins_pos(cwq->pool, cwq);
+		worklist = &cwq->pool->worklist;
 	} else {
 		work_flags |= WORK_STRUCT_DELAYED;
 		worklist = &cwq->delayed_works;
@@ -1387,6 +1353,7 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
 {
 	struct global_cwq *gcwq = pool->gcwq;
 	bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+	const char *pri = worker_pool_pri(pool) ? "H" : "";
 	struct worker *worker = NULL;
 	int id = -1;
 
@@ -1408,15 +1375,17 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
 
 	if (!on_unbound_cpu)
 		worker->task = kthread_create_on_node(worker_thread,
-						      worker,
-						      cpu_to_node(gcwq->cpu),
-						      "kworker/%u:%d", gcwq->cpu, id);
+					worker, cpu_to_node(gcwq->cpu),
+					"kworker/%u:%d%s", gcwq->cpu, id, pri);
 	else
 		worker->task = kthread_create(worker_thread, worker,
-					      "kworker/u:%d", id);
+					      "kworker/u:%d%s", id, pri);
 	if (IS_ERR(worker->task))
 		goto fail;
 
+	if (worker_pool_pri(pool))
+		set_user_nice(worker->task, HIGHPRI_NICE_LEVEL);
+
 	/*
 	 * A rogue worker will become a regular one if CPU comes
 	 * online later on.  Make sure every worker has
@@ -1763,10 +1732,9 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *work = list_first_entry(&cwq->delayed_works,
 						    struct work_struct, entry);
-	struct list_head *pos = pool_determine_ins_pos(cwq->pool, cwq);
 
 	trace_workqueue_activate_work(work);
-	move_linked_works(work, pos, NULL);
+	move_linked_works(work, &cwq->pool->worklist, NULL);
 	__clear_bit(WORK_STRUCT_DELAYED_BIT, work_data_bits(work));
 	cwq->nr_active++;
 }
@@ -1882,21 +1850,6 @@ __acquires(&gcwq->lock)
 	list_del_init(&work->entry);
 
 	/*
-	 * If HIGHPRI_PENDING, check the next work, and, if HIGHPRI,
-	 * wake up another worker; otherwise, clear HIGHPRI_PENDING.
-	 */
-	if (unlikely(pool->flags & POOL_HIGHPRI_PENDING)) {
-		struct work_struct *nwork = list_first_entry(&pool->worklist,
-					 struct work_struct, entry);
-
-		if (!list_empty(&pool->worklist) &&
-		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
-			wake_up_worker(pool);
-		else
-			pool->flags &= ~POOL_HIGHPRI_PENDING;
-	}
-
-	/*
 	 * CPU intensive works don't participate in concurrency
 	 * management.  They're the scheduler's responsibility.
 	 */
@@ -3049,9 +3002,10 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		int pool_idx = (bool)(flags & WQ_HIGHPRI);
 
 		BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
-		cwq->pool = &gcwq->pool;
+		cwq->pool = &gcwq->pools[pool_idx];
 		cwq->wq = wq;
 		cwq->flush_color = -1;
 		cwq->max_active = max_active;
-- 
1.7.7.3

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* Re: [PATCH 2/6] workqueue: factor out worker_pool from global_cwq
  2012-07-09 18:41   ` Tejun Heo
@ 2012-07-10  4:48     ` Namhyung Kim
  -1 siblings, 0 replies; 96+ messages in thread
From: Namhyung Kim @ 2012-07-10  4:48 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar,
	herbert, davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel,
	gustavo, johan.hedberg, linux-bluetooth, martin.petersen

Hi, Tejun

Just nitpicks..


On Mon,  9 Jul 2012 11:41:51 -0700, Tejun Heo wrote:
> Move worklist and all worker management fields from global_cwq into
> the new struct worker_pool.  worker_pool points back to the containing
> gcwq.  worker and cpu_workqueue_struct are updated to point to
> worker_pool instead of gcwq too.
>
> This change is mechanical and doesn't introduce any functional
> difference other than rearranging of fields and an added level of
> indirection in some places.  This is to prepare for multiple pools per
> gcwq.
>
> Signed-off-by: Tejun Heo <tj@kernel.org>
> ---
>  include/trace/events/workqueue.h |    2 +-
>  kernel/workqueue.c               |  216 ++++++++++++++++++++-----------------
>  2 files changed, 118 insertions(+), 100 deletions(-)
>
> diff --git a/include/trace/events/workqueue.h b/include/trace/events/workqueue.h
> index 4018f50..f28d1b6 100644
> --- a/include/trace/events/workqueue.h
> +++ b/include/trace/events/workqueue.h
> @@ -54,7 +54,7 @@ TRACE_EVENT(workqueue_queue_work,
>  		__entry->function	= work->func;
>  		__entry->workqueue	= cwq->wq;
>  		__entry->req_cpu	= req_cpu;
> -		__entry->cpu		= cwq->gcwq->cpu;
> +		__entry->cpu		= cwq->pool->gcwq->cpu;
>  	),
>  
>  	TP_printk("work struct=%p function=%pf workqueue=%p req_cpu=%u cpu=%u",
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 27637c2..bc43a0c 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -115,6 +115,7 @@ enum {
>   */
>  
>  struct global_cwq;
> +struct worker_pool;
>  
>  /*
>   * The poor guys doing the actual heavy lifting.  All on-duty workers
> @@ -131,7 +132,7 @@ struct worker {
>  	struct cpu_workqueue_struct *current_cwq; /* L: current_work's cwq */
>  	struct list_head	scheduled;	/* L: scheduled works */
>  	struct task_struct	*task;		/* I: worker task */
> -	struct global_cwq	*gcwq;		/* I: the associated gcwq */
> +	struct worker_pool	*pool;		/* I: the associated pool */
>  	/* 64 bytes boundary on 64bit, 32 on 32bit */
>  	unsigned long		last_active;	/* L: last active timestamp */
>  	unsigned int		flags;		/* X: flags */
> @@ -139,6 +140,21 @@ struct worker {
>  	struct work_struct	rebind_work;	/* L: rebind worker to cpu */
>  };
>  
> +struct worker_pool {
> +	struct global_cwq	*gcwq;		/* I: the owning gcwq */
> +
> +	struct list_head	worklist;	/* L: list of pending works */
> +	int			nr_workers;	/* L: total number of workers */
> +	int			nr_idle;	/* L: currently idle ones */
> +
> +	struct list_head	idle_list;	/* X: list of idle workers */
> +	struct timer_list	idle_timer;	/* L: worker idle timeout */
> +	struct timer_list	mayday_timer;	/* L: SOS timer for dworkers */

What is 'dworkers'?


> +
> +	struct ida		worker_ida;	/* L: for worker IDs */
> +	struct worker		*first_idle;	/* L: first idle worker */
> +};
> +
>  /*
>   * Global per-cpu workqueue.  There's one and only one for each cpu
>   * and all works are queued and processed here regardless of their
> @@ -146,27 +162,18 @@ struct worker {
>   */
>  struct global_cwq {
>  	spinlock_t		lock;		/* the gcwq lock */
> -	struct list_head	worklist;	/* L: list of pending works */
>  	unsigned int		cpu;		/* I: the associated cpu */
>  	unsigned int		flags;		/* L: GCWQ_* flags */
>  
> -	int			nr_workers;	/* L: total number of workers */
> -	int			nr_idle;	/* L: currently idle ones */
> -
> -	/* workers are chained either in the idle_list or busy_hash */
> -	struct list_head	idle_list;	/* X: list of idle workers */
> +	/* workers are chained either in busy_head or pool idle_list */

s/busy_head/busy_hash/ ?

Thanks,
Namhyung


>  	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
>  						/* L: hash of busy workers */
>  
> -	struct timer_list	idle_timer;	/* L: worker idle timeout */
> -	struct timer_list	mayday_timer;	/* L: SOS timer for dworkers */
> -
> -	struct ida		worker_ida;	/* L: for worker IDs */
> +	struct worker_pool	pool;		/* the worker pools */
>  
>  	struct task_struct	*trustee;	/* L: for gcwq shutdown */
>  	unsigned int		trustee_state;	/* L: trustee state */
>  	wait_queue_head_t	trustee_wait;	/* trustee wait */
> -	struct worker		*first_idle;	/* L: first idle worker */
>  } ____cacheline_aligned_in_smp;
>  
>  /*

^ permalink raw reply	[flat|nested] 96+ messages in thread

>  } ____cacheline_aligned_in_smp;
>  
>  /*

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* Re: [PATCH 3/6] workqueue: use @pool instead of @gcwq or @cpu where applicable
  2012-07-09 18:41   ` Tejun Heo
@ 2012-07-10 23:30       ` Tony Luck
  -1 siblings, 0 replies; 96+ messages in thread
From: Tony Luck @ 2012-07-10 23:30 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar,
	herbert, davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel,
	gustavo, johan.hedberg, linux-bluetooth, martin.petersen

On Mon, Jul 9, 2012 at 11:41 AM, Tejun Heo <tj@kernel.org> wrote:
> @@ -1234,7 +1235,7 @@ static void worker_enter_idle(struct worker *worker)
>          */
>         WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
>                      pool->nr_workers == pool->nr_idle &&
> -                    atomic_read(get_gcwq_nr_running(gcwq->cpu)));
> +                    atomic_read(get_pool_nr_running(pool)));
>  }

Just had this WARN_ON_ONCE trigger on ia64 while booting next-20120710.  I
haven't bisected ... just noticed that two patches in this series tinker
with the lines in this check.  next-20120706 didn't generate the WARN.

-Tony

Mount-cache hash table entries: 1024
ACPI: Core revision 20120518
Boot processor id 0x0/0x0
------------[ cut here ]------------
WARNING: at kernel/workqueue.c:1217 worker_enter_idle+0x2d0/0x4a0()
Modules linked in:

Call Trace:
 [<a0000001000154e0>] show_stack+0x80/0xa0
                                sp=e0000040600f7c30 bsp=e0000040600f0da8
 [<a000000100d6e870>] dump_stack+0x30/0x50
                                sp=e0000040600f7e00 bsp=e0000040600f0d90
 [<a0000001000730a0>] warn_slowpath_common+0xc0/0x100
                                sp=e0000040600f7e00 bsp=e0000040600f0d50
 [<a000000100073120>] warn_slowpath_null+0x40/0x60
                                sp=e0000040600f7e00 bsp=e0000040600f0d28
 [<a0000001000aaad0>] worker_enter_idle+0x2d0/0x4a0
                                sp=e0000040600f7e00 bsp=e0000040600f0cf0
 [<a0000001000ad020>] worker_thread+0x4a0/0xbe0
                                sp=e0000040600f7e00 bsp=e0000040600f0c28
 [<a0000001000bda70>] kthread+0x110/0x140
                                sp=e0000040600f7e00 bsp=e0000040600f0be8
 [<a000000100013510>] kernel_thread_helper+0x30/0x60
                                sp=e0000040600f7e30 bsp=e0000040600f0bc0
 [<a00000010000a0c0>] start_kernel_thread+0x20/0x40
                                sp=e0000040600f7e30 bsp=e0000040600f0bc0
---[ end trace 9501f2472a75a227 ]---



* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
  2012-07-09 18:41   ` Tejun Heo
@ 2012-07-12 13:06       ` Fengguang Wu
  -1 siblings, 0 replies; 96+ messages in thread
From: Fengguang Wu @ 2012-07-12 13:06 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar,
	herbert, davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel,
	gustavo, johan.hedberg, linux-bluetooth, martin.petersen

[-- Attachment #1: Type: message/external-body, Size: 538 bytes --]

[-- Attachment #2: dmesg-kvm-slim-4225-2012-07-12-19-15-31 --]
[-- Type: text/plain, Size: 28151 bytes --]

[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.5.0-rc6-08414-g9645fff (kbuild@snb) (gcc version 4.7.0 (Debian 4.7.0-11) ) #15 SMP Thu Jul 12 19:12:36 CST 2012
[    0.000000] Command line: trinity=10m tree=mm:akpm auth_hashtable_size=10 sunrpc.auth_hashtable_size=10 log_buf_len=8M ignore_loglevel debug sched_debug apic=debug dynamic_printk sysrq_always_enabled panic=10 hung_task_panic=1 softlockup_panic=1 unknown_nmi_panic=1 nmi_watchdog=panic,lapic  prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal  root=/dev/ram0 rw link=vmlinuz-2012-07-12-19-14-51-mm-origin.akpm-674d249-9645fff-x86_64-randconfig-mm7-1-slim BOOT_IMAGE=kernel-tests/kernels/x86_64-randconfig-mm7/9645fffacccf3082c94097b03e5f950e4713f18a/vmlinuz-3.5.0-rc6-08414-g9645fff
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   Centaur CentaurHauls
[    0.000000] Disabled fast string operations
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x0000000000093bff] usable
[    0.000000] BIOS-e820: [mem 0x0000000000093c00-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000000fffcfff] usable
[    0.000000] BIOS-e820: [mem 0x000000000fffd000-0x000000000fffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[    0.000000] debug: ignoring loglevel setting.
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] e820: update [mem 0x00000000-0x0000ffff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0xfffd max_arch_pfn = 0x400000000
[    0.000000] MTRR default type: write-back
[    0.000000] MTRR fixed ranges enabled:
[    0.000000]   00000-9FFFF write-back
[    0.000000]   A0000-BFFFF uncachable
[    0.000000]   C0000-FFFFF write-protect
[    0.000000] MTRR variable ranges enabled:
[    0.000000]   0 base 00E0000000 mask FFE0000000 uncachable
[    0.000000]   1 disabled
[    0.000000]   2 disabled
[    0.000000]   3 disabled
[    0.000000]   4 disabled
[    0.000000]   5 disabled
[    0.000000]   6 disabled
[    0.000000]   7 disabled
[    0.000000] Scan for SMP in [mem 0x00000000-0x000003ff]
[    0.000000] Scan for SMP in [mem 0x0009fc00-0x0009ffff]
[    0.000000] Scan for SMP in [mem 0x000f0000-0x000fffff]
[    0.000000] found SMP MP-table at [mem 0x000fdac0-0x000fdacf] mapped at [ffff8800000fdac0]
[    0.000000]   mpc: fdad0-fdbec
[    0.000000] initial memory mapped: [mem 0x00000000-0x1fffffff]
[    0.000000] Base memory trampoline at [ffff88000008d000] 8d000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x0fffcfff]
[    0.000000]  [mem 0x00000000-0x0fffcfff] page 4k
[    0.000000] kernel direct mapping tables up to 0xfffcfff @ [mem 0x0e854000-0x0e8d5fff]
[    0.000000] log_buf_len: 8388608
[    0.000000] early log buf free: 127940(97%)
[    0.000000] RAMDISK: [mem 0x0e8d6000-0x0ffeffff]
[    0.000000] No NUMA configuration found
[    0.000000] Faking a node at [mem 0x0000000000000000-0x000000000fffcfff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x0fffcfff]
[    0.000000]   NODE_DATA [mem 0x0fff8000-0x0fffcfff]
[    0.000000] kvm-clock: Using msrs 12 and 11
[    0.000000] kvm-clock: cpu 0, msr 0:1c6ce01, boot clock
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00010000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00010000-0x00092fff]
[    0.000000]   node   0: [mem 0x00100000-0x0fffcfff]
[    0.000000] On node 0 totalpages: 65408
[    0.000000]   DMA zone: 64 pages used for memmap
[    0.000000]   DMA zone: 6 pages reserved
[    0.000000]   DMA zone: 3901 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 960 pages used for memmap
[    0.000000]   DMA32 zone: 60477 pages, LIFO batch:15
[    0.000000] Intel MultiProcessor Specification v1.4
[    0.000000]   mpc: fdad0-fdbec
[    0.000000] MPTABLE: OEM ID: BOCHSCPU
[    0.000000] MPTABLE: Product ID: 0.1         
[    0.000000] MPTABLE: APIC at: 0xFEE00000
[    0.000000] mapped APIC to ffffffffff5fb000 (        fee00000)
[    0.000000] Processor #0 (Bootup-CPU)
[    0.000000] Processor #1
[    0.000000] Bus #0 is PCI   
[    0.000000] Bus #1 is ISA   
[    0.000000] IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 04, APIC ID 2, APIC INT 09
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 0c, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 10, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 14, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 18, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 1c, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 20, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 24, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 00, APIC ID 2, APIC INT 02
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 01, APIC ID 2, APIC INT 01
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 03, APIC ID 2, APIC INT 03
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 04, APIC ID 2, APIC INT 04
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 05, APIC ID 2, APIC INT 05
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 06, APIC ID 2, APIC INT 06
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 07, APIC ID 2, APIC INT 07
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 08, APIC ID 2, APIC INT 08
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0c, APIC ID 2, APIC INT 0c
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0d, APIC ID 2, APIC INT 0d
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0e, APIC ID 2, APIC INT 0e
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0f, APIC ID 2, APIC INT 0f
[    0.000000] Lint: type 3, pol 0, trig 0, bus 01, IRQ 00, APIC ID 0, APIC LINT 00
[    0.000000] Lint: type 1, pol 0, trig 0, bus 01, IRQ 00, APIC ID 0, APIC LINT 01
[    0.000000] Processors: 2
[    0.000000] smpboot: Allowing 2 CPUs, 0 hotplug CPUs
[    0.000000] mapped IOAPIC to ffffffffff5fa000 (fec00000)
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: 0000000000093000 - 0000000000094000
[    0.000000] PM: Registered nosave memory: 0000000000094000 - 00000000000a0000
[    0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000
[    0.000000] PM: Registered nosave memory: 00000000000f0000 - 0000000000100000
[    0.000000] e820: [mem 0x10000000-0xfeffbfff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on KVM
[    0.000000] setup_percpu: NR_CPUS:8 nr_cpumask_bits:8 nr_cpu_ids:2 nr_node_ids:1
[    0.000000] PERCPU: Embedded 26 pages/cpu @ffff88000dc00000 s76800 r8192 d21504 u1048576
[    0.000000] pcpu-alloc: s76800 r8192 d21504 u1048576 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 
[    0.000000] kvm-clock: cpu 0, msr 0:dc11e01, primary cpu clock
[    0.000000] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 64378
[    0.000000] Policy zone: DMA32
[    0.000000] Kernel command line: trinity=10m tree=mm:akpm auth_hashtable_size=10 sunrpc.auth_hashtable_size=10 log_buf_len=8M ignore_loglevel debug sched_debug apic=debug dynamic_printk sysrq_always_enabled panic=10 hung_task_panic=1 softlockup_panic=1 unknown_nmi_panic=1 nmi_watchdog=panic,lapic  prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal  root=/dev/ram0 rw link=vmlinuz-2012-07-12-19-14-51-mm-origin.akpm-674d249-9645fff-x86_64-randconfig-mm7-1-slim BOOT_IMAGE=kernel-tests/kernels/x86_64-randconfig-mm7/9645fffacccf3082c94097b03e5f950e4713f18a/vmlinuz-3.5.0-rc6-08414-g9645fff
[    0.000000] PID hash table entries: 1024 (order: 1, 8192 bytes)
[    0.000000] __ex_table already sorted, skipping sort
[    0.000000] Memory: 199892k/262132k available (4847k kernel code, 500k absent, 61740k reserved, 7791k data, 568k init)
[    0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
[    0.000000] Hierarchical RCU implementation.
[    0.000000] 	RCU debugfs-based tracing is enabled.
[    0.000000] 	RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
[    0.000000] NR_IRQS:4352 nr_irqs:56 16
[    0.000000] console [ttyS0] enabled
[    0.000000] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
[    0.000000] ... MAX_LOCKDEP_SUBCLASSES:  8
[    0.000000] ... MAX_LOCK_DEPTH:          48
[    0.000000] ... MAX_LOCKDEP_KEYS:        8191
[    0.000000] ... CLASSHASH_SIZE:          4096
[    0.000000] ... MAX_LOCKDEP_ENTRIES:     16384
[    0.000000] ... MAX_LOCKDEP_CHAINS:      32768
[    0.000000] ... CHAINHASH_SIZE:          16384
[    0.000000]  memory used by lock dependency info: 5855 kB
[    0.000000]  per task-struct memory footprint: 1920 bytes
[    0.000000] ------------------------
[    0.000000] | Locking API testsuite:
[    0.000000] ----------------------------------------------------------------------------
[    0.000000]                                  | spin |wlock |rlock |mutex | wsem | rsem |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]                      A-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                  A-B-B-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]              A-B-B-C-C-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]              A-B-C-A-B-C deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-B-C-C-D-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-C-D-B-D-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-C-D-B-C-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                     double unlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                   initialize held:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                  bad unlock order:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]               recursive read-lock:             |  ok  |             |  ok  |
[    0.000000]            recursive read-lock #2:             |  ok  |             |  ok  |
[    0.000000]             mixed read-write-lock:             |  ok  |             |  ok  |
[    0.000000]             mixed write-read-lock:             |  ok  |             |  ok  |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]      hard-irqs-on + irq-safe-A/12:  ok  |  ok  |  ok  |
[    0.000000]      soft-irqs-on + irq-safe-A/12:  ok  |  ok  |  ok  |
[    0.000000]      hard-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
[    0.000000]      soft-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
[    0.000000]        sirq-safe-A => hirqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]        sirq-safe-A => hirqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]          hard-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]          soft-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]          hard-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]          soft-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/132:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/132:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/213:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/213:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/231:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/231:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/312:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/312:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/321:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/321:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/123:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/123:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/132:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/132:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/213:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/213:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/231:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/231:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/312:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/312:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/321:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/321:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/123:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/123:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/132:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/132:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/213:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/213:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/231:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/231:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/312:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/312:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/321:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/321:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq read-recursion/123:  ok  |
[    0.000000]       soft-irq read-recursion/123:  ok  |
[    0.000000]       hard-irq read-recursion/132:  ok  |
[    0.000000]       soft-irq read-recursion/132:  ok  |
[    0.000000]       hard-irq read-recursion/213:  ok  |
[    0.000000]       soft-irq read-recursion/213:  ok  |
[    0.000000]       hard-irq read-recursion/231:  ok  |
[    0.000000]       soft-irq read-recursion/231:  ok  |
[    0.000000]       hard-irq read-recursion/312:  ok  |
[    0.000000]       soft-irq read-recursion/312:  ok  |
[    0.000000]       hard-irq read-recursion/321:  ok  |
[    0.000000]       soft-irq read-recursion/321:  ok  |
[    0.000000] -------------------------------------------------------
[    0.000000] Good, all 218 testcases passed! |
[    0.000000] ---------------------------------
[    0.000000] tsc: Detected 2999.462 MHz processor
[    0.001999] Calibrating delay loop (skipped) preset value.. 5998.92 BogoMIPS (lpj=2999462)
[    0.003010] pid_max: default: 32768 minimum: 301
[    0.005213] Security Framework initialized
[    0.006075] Yama: becoming mindful.
[    0.008740] Dentry cache hash table entries: 32768 (order: 6, 262144 bytes)
[    0.011850] Inode-cache hash table entries: 16384 (order: 5, 131072 bytes)
[    0.014061] Mount-cache hash table entries: 256
[    0.018011] Initializing cgroup subsys debug
[    0.019012] Initializing cgroup subsys freezer
[    0.020010] Initializing cgroup subsys perf_event
[    0.021165] Disabled fast string operations
[    0.024612] ftrace: allocating 11013 entries in 44 pages
[    0.033344] Getting VERSION: 50014
[    0.034015] Getting VERSION: 50014
[    0.035014] Getting ID: 0
[    0.035731] Getting ID: ff000000
[    0.036014] Getting LVT0: 8700
[    0.037011] Getting LVT1: 8400
[    0.038084] enabled ExtINT on CPU#0
[    0.040907] ENABLING IO-APIC IRQs
[    0.041011] init IO_APIC IRQs
[    0.042007]  apic 2 pin 0 not connected
[    0.043041] IOAPIC[0]: Set routing entry (2-1 -> 0x41 -> IRQ 1 Mode:0 Active:0 Dest:1)
[    0.045035] IOAPIC[0]: Set routing entry (2-2 -> 0x51 -> IRQ 0 Mode:0 Active:0 Dest:1)
[    0.047032] IOAPIC[0]: Set routing entry (2-3 -> 0x61 -> IRQ 3 Mode:0 Active:0 Dest:1)
[    0.049046] IOAPIC[0]: Set routing entry (2-4 -> 0x71 -> IRQ 4 Mode:0 Active:0 Dest:1)
[    0.051027] IOAPIC[0]: Set routing entry (2-5 -> 0x81 -> IRQ 5 Mode:0 Active:0 Dest:1)
[    0.053027] IOAPIC[0]: Set routing entry (2-6 -> 0x91 -> IRQ 6 Mode:0 Active:0 Dest:1)
[    0.055027] IOAPIC[0]: Set routing entry (2-7 -> 0xa1 -> IRQ 7 Mode:0 Active:0 Dest:1)
[    0.057026] IOAPIC[0]: Set routing entry (2-8 -> 0xb1 -> IRQ 8 Mode:0 Active:0 Dest:1)
[    0.059037] IOAPIC[0]: Set routing entry (2-9 -> 0xc1 -> IRQ 33 Mode:1 Active:0 Dest:1)
[    0.062029] IOAPIC[0]: Set routing entry (2-10 -> 0xd1 -> IRQ 34 Mode:1 Active:0 Dest:1)
[    0.064029] IOAPIC[0]: Set routing entry (2-11 -> 0xe1 -> IRQ 35 Mode:1 Active:0 Dest:1)
[    0.066023] IOAPIC[0]: Set routing entry (2-12 -> 0x22 -> IRQ 12 Mode:0 Active:0 Dest:1)
[    0.068026] IOAPIC[0]: Set routing entry (2-13 -> 0x42 -> IRQ 13 Mode:0 Active:0 Dest:1)
[    0.070025] IOAPIC[0]: Set routing entry (2-14 -> 0x52 -> IRQ 14 Mode:0 Active:0 Dest:1)
[    0.073004] IOAPIC[0]: Set routing entry (2-15 -> 0x62 -> IRQ 15 Mode:0 Active:0 Dest:1)
[    0.075020]  apic 2 pin 16 not connected
[    0.075999]  apic 2 pin 17 not connected
[    0.076999]  apic 2 pin 18 not connected
[    0.077999]  apic 2 pin 19 not connected
[    0.078999]  apic 2 pin 20 not connected
[    0.079999]  apic 2 pin 21 not connected
[    0.080999]  apic 2 pin 22 not connected
[    0.081999]  apic 2 pin 23 not connected
[    0.083158] ..TIMER: vector=0x51 apic1=0 pin1=2 apic2=-1 pin2=-1
[    0.084998] smpboot: CPU0: Intel Common KVM processor stepping 01
[    0.087425] Using local APIC timer interrupts.
[    0.087425] calibrating APIC timer ...
[    0.090992] ... lapic delta = 6249032
[    0.090992] ..... delta 6249032
[    0.090992] ..... mult: 268434682
[    0.090992] ..... calibration result: 999845
[    0.090992] ..... CPU clock speed is 2998.0997 MHz.
[    0.090992] ..... host bus clock speed is 999.0845 MHz.
[    0.090992] ... verify APIC timer
[    0.201346] ... jiffies delta = 100
[    0.201984] ... jiffies result ok
[    0.203030] Performance Events: unsupported Netburst CPU model 6 no PMU driver, software events only.
[    0.207035] ------------[ cut here ]------------
[    0.207977] WARNING: at /c/kernel-tests/mm/kernel/workqueue.c:1217 worker_enter_idle+0x2b8/0x32b()
[    0.207977] Modules linked in:
[    0.207977] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-08414-g9645fff #15
[    0.207977] Call Trace:
[    0.207977]  [<ffffffff81087189>] ? worker_enter_idle+0x2b8/0x32b
[    0.207977]  [<ffffffff810559d9>] warn_slowpath_common+0xae/0xdb
[    0.207977]  [<ffffffff81055a2e>] warn_slowpath_null+0x28/0x31
[    0.207977]  [<ffffffff81087189>] worker_enter_idle+0x2b8/0x32b
[    0.207977]  [<ffffffff81087222>] start_worker+0x26/0x42
[    0.207977]  [<ffffffff81c8b261>] init_workqueues+0x2d2/0x59a
[    0.207977]  [<ffffffff81c8af8f>] ? usermodehelper_init+0x8a/0x8a
[    0.207977]  [<ffffffff81000284>] do_one_initcall+0xce/0x272
[    0.207977]  [<ffffffff81c6f650>] kernel_init+0x12e/0x3c1
[    0.207977]  [<ffffffff814b9b74>] kernel_thread_helper+0x4/0x10
[    0.207977]  [<ffffffff814b80b0>] ? retint_restore_args+0x13/0x13
[    0.207977]  [<ffffffff81c6f522>] ? start_kernel+0x737/0x737
[    0.207977]  [<ffffffff814b9b70>] ? gs_change+0x13/0x13
[    0.207977] ---[ end trace 5eb91373aeac2b15 ]---
[    0.210519] Testing tracer nop: PASSED
[    0.212314] NMI watchdog: disabled (cpu0): hardware events not enabled
[    0.215909] SMP alternatives: lockdep: fixing up alternatives
[    0.216992] smpboot: Booting Node   0, Processors  #1 OK
[    0.001999] kvm-clock: cpu 1, msr 0:dd11e01, secondary cpu clock
[    0.001999] masked ExtINT on CPU#1
[    0.001999] Disabled fast string operations
[    0.233973] TSC synchronization [CPU#0 -> CPU#1]:
[    0.233973] Measured 1551 cycles TSC warp between CPUs, turning off TSC clock.
[    0.233973] tsc: Marking TSC unstable due to check_tsc_sync_source failed
[    0.244338] Brought up 2 CPUs
[    0.244988] ----------------
[    0.245746] | NMI testsuite:
[    0.245976] --------------------
[    0.246976]   remote IPI:  ok  |
[    0.251287]    local IPI:  ok  |
[    0.256982] --------------------
[    0.257844] Good, all   2 testcases passed! |
[    0.258974] ---------------------------------
[    0.259976] smpboot: Total of 2 processors activated (11997.84 BogoMIPS)
[    0.262415] CPU0 attaching sched-domain:
[    0.262979]  domain 0: span 0-1 level CPU
[    0.264443]   groups: 0 (cpu_power = 1023) 1
[    0.265676] CPU1 attaching sched-domain:
[    0.265976]  domain 0: span 0-1 level CPU
[    0.267973]   groups: 1 0 (cpu_power = 1023)
[    0.277762] devtmpfs: initialized
[    0.278040] device: 'platform': device_add
[    0.279040] PM: Adding info for No Bus:platform
[    0.281100] bus: 'platform': registered
[    0.282097] bus: 'cpu': registered
[    0.282977] device: 'cpu': device_add
[    0.288670] PM: Adding info for No Bus:cpu
[    0.289057] bus: 'memory': registered
[    0.289975] device: 'memory': device_add
[    0.290996] PM: Adding info for No Bus:memory
[    0.293022] device: 'memory0': device_add
[    0.294004] bus: 'memory': add device memory0
[    0.301519] PM: Adding info for memory:memory0
[    0.302133] device: 'memory1': device_add
[    0.302977] bus: 'memory': add device memory1
[    0.304997] PM: Adding info for memory:memory1
[    0.322930] atomic64 test passed for x86-64 platform with CX8 and with SSE
[    0.323973] device class 'regulator': registering
[    0.326225] Registering platform device 'reg-dummy'. Parent at platform
[    0.335503] device: 'reg-dummy': device_add
[    0.335986] bus: 'platform': add device reg-dummy
[    0.337979] PM: Adding info for platform:reg-dummy
[    0.339011] bus: 'platform': add driver reg-dummy
[    0.339974] bus: 'platform': driver_probe_device: matched device reg-dummy with driver reg-dummy
[    0.341966] bus: 'platform': really_probe: probing driver reg-dummy with device reg-dummy
[    0.352600] device: 'regulator.0': device_add
[    0.353991] PM: Adding info for No Bus:regulator.0
[    0.355105] dummy: 
[    0.356032] driver: 'reg-dummy': driver_bound: bound to device 'reg-dummy'
[    0.357006] bus: 'platform': really_probe: bound device reg-dummy to driver reg-dummy
[    0.365639] RTC time: 11:15:27, date: 07/12/12
[    0.367178] NET: Registered protocol family 16
[    0.368221] device class 'bdi': registering
[    0.369003] device class 'tty': registering
[    0.370005] bus: 'node': registered
[    0.370963] device: 'node': device_add
[    0.378514] PM: Adding info for No Bus:node
[    0.379975] device class 'dma': registering
[    0.381072] device: 'node0': device_add
[    0.381964] bus: 'node': add device node0
[    0.382983] PM: Adding info for node:node0
[    0.384059] device: 'cpu0': device_add
[    0.391500] bus: 'cpu': add device cpu0
[    0.391982] PM: Adding info for cpu:cpu0
[    0.393007] device: 'cpu1': device_add
[    0.394015] bus: 'cpu': add device cpu1
[    0.394982] PM: Adding info for cpu:cpu1
[    0.395990] mtrr: your CPUs had inconsistent variable MTRR settings
[    0.397953] mtrr: your CPUs had inconsistent MTRRdefType settings
[    0.399953] mtrr: probably your BIOS does not setup all CPUs.
[    0.400953] mtrr: corrected configuration.
[    0.414025] device: 'default': device_add
[    0.415055] PM: Adding info for No Bus:default
[    0.418486] bio: create slab <bio-0> at 0
[    0.419082] device class 'block': registering
[    0.421070] device class 'misc': registering
[    0.422218] bus: 'serio': registered
[    0.422962] device class 'input': registering
[    0.426047] device class 'power_supply': registering
[    0.426983] device class 'watchdog': registering
[    0.428039] device class 'net': registering
[    0.430169] device: 'lo': device_add
[    0.431185] PM: Adding info for No Bus:lo
[    0.431604] Switching to clocksource kvm-clock
[    0.436812] Warning: could not register all branches stats
[    0.438281] Warning: could not register annotated branches stats
[    0.561660] device class 'mem': registering
[    0.562848] device: 'mem': device_add
[    0.564244] PM: Adding info for No Bus:mem
[    0.565406] device: 'kmem': device_add
[    0.566698] PM: Adding info for No Bus:kmem
[    0.567942] device: 'null': device_add
[    0.569141] PM: Adding info for No Bus:null
[    0.570280] device: 'zero': device_add
[    0.571499] PM: Adding info for No Bus:zero
[    0.572649] device: 'full': device_add
[    0.573805] PM: Adding info for No Bus:full
[    0.574929] device: 'random': device_add
[    0.576239] PM: Adding info for No Bus:random
[    0.577487] device: 'urandom': device_add
[    0.578784] PM: Adding info for No Bus:urandom
[    0.579994] device: 'kmsg': device_add
[    0.581186] PM: Adding info for No Bus:kmsg
[    0.582333] device: 'tty': device_add
[    0.583552] PM: Adding info for No Bus:tty
[    0.584866] device: 'console': device_add
[    0.586191] PM: Adding info for No Bus:console
[    0.587491] NET: Registered protocol family 1
[    0.589321] Unpacking initramfs...
[    2.786882] debug: unmapping init [mem 0xffff88000e8d6000-0xffff88000ffeffff]
[    2.871676] DMA-API: preallocated 32768 debug entries
[    2.873030] DMA-API: debugging enabled by kernel config
[    2.874668] Registering platform device 'rtc_cmos'. Parent at platform
[    2.876377] device: 'rtc_cmos': device_add
[    2.877481] bus: 'platform': add device rtc_cmos
[    2.878840] PM: Adding info for platform:rtc_cmos
[    2.880110] platform rtc_cmos: registered platform RTC device (no PNP device found)
[    2.882497] device: 'snapshot': device_add
[    2.883820] PM: Adding info for No Bus:snapshot
[    2.885128] bus: 'clocksource': registered
[    2.886236] device: 'clocksource': device_add
[    2.887415] PM: Adding info for No Bus:clocksource
[    2.888714] device: 'clocksource0': device_add
[    2.889895] bus: 'clocksource': add device clocksource0
[    2.891313] PM: Adding info for clocksource:clocksource0
[    2.892734] bus: 'platform': add driver alarmtimer
[    2.894050] Registering platform device 'alarmtimer'. Parent at platform
[    2.895808] device: 'alarmtimer': device_add
[    2.896943] bus: 'platform': add device alarmtimer
[    2.898261] PM: Adding info for platform:alarmtimer
[    2.899553] bus: 'platform': driver_probe_device: matched device alarmtimer with driver alarmtimer
[    2.901860] bus: 'platform': really_probe: probing driver alarmtimer with device alarmtimer
[    2.904029] driver: 'alarmtimer': driver_bound: bound to device 'alarmtimer'
[    2.905872] bus: 'platform': really_probe: bound device alarmtimer to driver alarmtimer
[    2.908139] audit: initializing netlink socket (disabled)
[    2.909625] type=2000 audit(1342091729.908:1): initialized
[    2.923090] Testing tracer function: PASSED
[    3.083849] Testing dynamic ftrace: PASSED
[    3.347420] Testing dynamic ftrace ops #1: [    3.374759] kwatchdog (24) used greatest stack depth: 6584 bytes left
(1 0 1 1 0) (1 1 2 1 0) 

[-- Attachment #3: config-3.5.0-rc6-08414-g9645fff --]
[-- Type: text/plain, Size: 50953 bytes --]

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 3.5.0-rc6 Kernel Configuration
#
CONFIG_64BIT=y
# CONFIG_X86_32 is not set
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_LATENCYTOP_SUPPORT=y
CONFIG_MMU=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_NEED_SG_DMA_LENGTH=y
# CONFIG_GENERIC_ISA_DMA is not set
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_GENERIC_GPIO=y
# CONFIG_ARCH_MAY_HAVE_PC_FDC is not set
# CONFIG_RWSEM_GENERIC_SPINLOCK is not set
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_DEFAULT_IDLE=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_X86_64_SMP=y
CONFIG_X86_HT=y
CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
CONFIG_ARCH_CPU_PROBE_RELEASE=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_CONSTRUCTORS=y
CONFIG_HAVE_IRQ_WORK=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y

#
# General setup
#
# CONFIG_EXPERIMENTAL is not set
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_CROSS_COMPILE=""
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
# CONFIG_KERNEL_GZIP is not set
CONFIG_KERNEL_BZIP2=y
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
# CONFIG_BSD_PROCESS_ACCT is not set
CONFIG_FHANDLE=y
CONFIG_TASKSTATS=y
# CONFIG_TASK_DELAY_ACCT is not set
# CONFIG_TASK_XACCT is not set
CONFIG_AUDIT=y
# CONFIG_AUDITSYSCALL is not set
# CONFIG_AUDIT_LOGINUID_IMMUTABLE is not set
CONFIG_HAVE_GENERIC_HARDIRQS=y

#
# IRQ subsystem
#
CONFIG_GENERIC_HARDIRQS=y
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_IRQ_DOMAIN=y
# CONFIG_IRQ_DOMAIN_DEBUG is not set
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_DATA=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
# CONFIG_NO_HZ is not set
CONFIG_HIGH_RES_TIMERS=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_PREEMPT_RCU is not set
CONFIG_RCU_FANOUT=64
CONFIG_RCU_FANOUT_LEAF=16
# CONFIG_RCU_FANOUT_EXACT is not set
CONFIG_TREE_RCU_TRACE=y
CONFIG_IKCONFIG=y
# CONFIG_IKCONFIG_PROC is not set
CONFIG_LOG_BUF_SHIFT=17
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_FREEZER=y
# CONFIG_CGROUP_DEVICE is not set
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
# CONFIG_CGROUP_CPUACCT is not set
# CONFIG_RESOURCE_COUNTERS is not set
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
# CONFIG_BLK_CGROUP is not set
# CONFIG_CHECKPOINT_RESTORE is not set
# CONFIG_NAMESPACES is not set
CONFIG_SCHED_AUTOGROUP=y
CONFIG_MM_OWNER=y
# CONFIG_SYSFS_DEPRECATED is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
# CONFIG_RD_XZ is not set
CONFIG_RD_LZO=y
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
CONFIG_EXPERT=y
# CONFIG_UID16 is not set
CONFIG_SYSCTL_SYSCALL=y
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
CONFIG_HOTPLUG=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_HAVE_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
# CONFIG_EPOLL is not set
# CONFIG_SIGNALFD is not set
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
# CONFIG_SHMEM is not set
CONFIG_AIO=y
CONFIG_EMBEDDED=y
CONFIG_HAVE_PERF_EVENTS=y
CONFIG_PERF_USE_VMALLOC=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
CONFIG_DEBUG_PERF_USE_VMALLOC=y
CONFIG_VM_EVENT_COUNTERS=y
CONFIG_SLUB_DEBUG=y
# CONFIG_COMPAT_BRK is not set
# CONFIG_SLAB is not set
CONFIG_SLUB=y
# CONFIG_SLOB is not set
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
CONFIG_OPROFILE=m
# CONFIG_OPROFILE_EVENT_MULTIPLEX is not set
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
# CONFIG_KPROBES is not set
# CONFIG_JUMP_LABEL is not set
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_USE_GENERIC_SMP_HELPERS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y

#
# GCOV-based kernel profiling
#
CONFIG_GCOV_KERNEL=y
CONFIG_GCOV_PROFILE_ALL=y
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
# CONFIG_MODULE_UNLOAD is not set
# CONFIG_MODVERSIONS is not set
CONFIG_MODULE_SRCVERSION_ALL=y
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
# CONFIG_BLK_DEV_INTEGRITY is not set

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_OSF_PARTITION is not set
# CONFIG_AMIGA_PARTITION is not set
# CONFIG_ATARI_PARTITION is not set
# CONFIG_MAC_PARTITION is not set
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
# CONFIG_MINIX_SUBPARTITION is not set
# CONFIG_SOLARIS_X86_PARTITION is not set
CONFIG_UNIXWARE_DISKLABEL=y
CONFIG_LDM_PARTITION=y
CONFIG_LDM_DEBUG=y
# CONFIG_SGI_PARTITION is not set
CONFIG_ULTRIX_PARTITION=y
# CONFIG_SUN_PARTITION is not set
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
CONFIG_SYSV68_PARTITION=y
CONFIG_BLOCK_COMPAT=y

#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_DEADLINE=m
# CONFIG_IOSCHED_CFQ is not set
CONFIG_DEFAULT_NOOP=y
CONFIG_DEFAULT_IOSCHED="noop"
# CONFIG_INLINE_SPIN_TRYLOCK is not set
# CONFIG_INLINE_SPIN_TRYLOCK_BH is not set
# CONFIG_INLINE_SPIN_LOCK is not set
# CONFIG_INLINE_SPIN_LOCK_BH is not set
# CONFIG_INLINE_SPIN_LOCK_IRQ is not set
# CONFIG_INLINE_SPIN_LOCK_IRQSAVE is not set
CONFIG_UNINLINE_SPIN_UNLOCK=y
# CONFIG_INLINE_SPIN_UNLOCK_BH is not set
# CONFIG_INLINE_SPIN_UNLOCK_IRQ is not set
# CONFIG_INLINE_SPIN_UNLOCK_IRQRESTORE is not set
# CONFIG_INLINE_READ_TRYLOCK is not set
# CONFIG_INLINE_READ_LOCK is not set
# CONFIG_INLINE_READ_LOCK_BH is not set
# CONFIG_INLINE_READ_LOCK_IRQ is not set
# CONFIG_INLINE_READ_LOCK_IRQSAVE is not set
# CONFIG_INLINE_READ_UNLOCK is not set
# CONFIG_INLINE_READ_UNLOCK_BH is not set
# CONFIG_INLINE_READ_UNLOCK_IRQ is not set
# CONFIG_INLINE_READ_UNLOCK_IRQRESTORE is not set
# CONFIG_INLINE_WRITE_TRYLOCK is not set
# CONFIG_INLINE_WRITE_LOCK is not set
# CONFIG_INLINE_WRITE_LOCK_BH is not set
# CONFIG_INLINE_WRITE_LOCK_IRQ is not set
# CONFIG_INLINE_WRITE_LOCK_IRQSAVE is not set
# CONFIG_INLINE_WRITE_UNLOCK is not set
# CONFIG_INLINE_WRITE_UNLOCK_BH is not set
# CONFIG_INLINE_WRITE_UNLOCK_IRQ is not set
# CONFIG_INLINE_WRITE_UNLOCK_IRQRESTORE is not set
# CONFIG_MUTEX_SPIN_ON_OWNER is not set
CONFIG_FREEZER=y

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_MPPARSE=y
# CONFIG_X86_EXTENDED_PLATFORM is not set
CONFIG_SCHED_OMIT_FRAME_POINTER=y
# CONFIG_KVMTOOL_TEST_ENABLE is not set
CONFIG_PARAVIRT_GUEST=y
# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set
# CONFIG_XEN is not set
# CONFIG_XEN_PRIVILEGED_GUEST is not set
CONFIG_KVM_CLOCK=y
CONFIG_KVM_GUEST=y
CONFIG_PARAVIRT=y
CONFIG_PARAVIRT_CLOCK=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_NO_BOOTMEM=y
# CONFIG_MEMTEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_CMPXCHG=y
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_XADD=y
CONFIG_X86_WP_WORKS_OK=y
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_PROCESSOR_SELECT=y
CONFIG_CPU_SUP_INTEL=y
# CONFIG_CPU_SUP_AMD is not set
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_HPET_TIMER=y
# CONFIG_DMI is not set
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
CONFIG_NR_CPUS=8
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
# CONFIG_IRQ_TIME_ACCOUNTING is not set
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
# CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS is not set
# CONFIG_X86_MCE is not set
# CONFIG_I8K is not set
# CONFIG_MICROCODE is not set
# CONFIG_X86_MSR is not set
CONFIG_X86_CPUID=m
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
# CONFIG_DIRECT_GBPAGES is not set
CONFIG_NUMA=y
# CONFIG_NUMA_EMU is not set
CONFIG_NODES_SHIFT=6
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
CONFIG_ARCH_MEMORY_PROBE=y
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
# CONFIG_SPARSEMEM_VMEMMAP is not set
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
CONFIG_ARCH_DISCARD_MEMBLOCK=y
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_SPARSE=y
# CONFIG_MEMORY_HOTREMOVE is not set
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=999999
CONFIG_COMPACTION=y
CONFIG_MIGRATION=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_VIRT_TO_BUS=y
# CONFIG_KSM is not set
CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
# CONFIG_CROSS_MEMORY_ATTACH is not set
CONFIG_CLEANCACHE=y
# CONFIG_FRONTSWAP is not set
# CONFIG_X86_CHECK_BIOS_CORRUPTION is not set
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
# CONFIG_MTRR_SANITIZER is not set
# CONFIG_X86_PAT is not set
# CONFIG_ARCH_RANDOM is not set
# CONFIG_SECCOMP is not set
# CONFIG_CC_STACKPROTECTOR is not set
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
# CONFIG_CRASH_DUMP is not set
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_PHYSICAL_ALIGN=0x1000000
CONFIG_HOTPLUG_CPU=y
# CONFIG_COMPAT_VDSO is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y

#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
CONFIG_PM_AUTOSLEEP=y
# CONFIG_PM_WAKELOCKS is not set
CONFIG_PM_RUNTIME=y
CONFIG_PM=y
CONFIG_PM_DEBUG=y
CONFIG_PM_ADVANCED_DEBUG=y
CONFIG_PM_SLEEP_DEBUG=y
CONFIG_PM_TRACE=y
CONFIG_PM_TRACE_RTC=y
# CONFIG_SFI is not set

#
# CPU Frequency scaling
#
# CONFIG_CPU_FREQ is not set
CONFIG_CPU_IDLE=y
CONFIG_CPU_IDLE_GOV_LADDER=y
# CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not set
# CONFIG_INTEL_IDLE is not set

#
# Memory power savings
#

#
# Bus options (PCI etc.)
#
# CONFIG_PCI is not set
# CONFIG_ARCH_SUPPORTS_MSI is not set
# CONFIG_ISA_DMA_API is not set
CONFIG_PCCARD=m
CONFIG_PCMCIA=m

#
# PC-card bridges
#

#
# Executable file formats / Emulations
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
# CONFIG_HAVE_AOUT is not set
# CONFIG_BINFMT_MISC is not set
CONFIG_IA32_EMULATION=y
# CONFIG_IA32_AOUT is not set
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_KEYS_COMPAT=y
CONFIG_HAVE_TEXT_POKE_SMP=y
CONFIG_X86_DEV_DMA_OPS=y
CONFIG_NET=y
CONFIG_COMPAT_NETLINK_MESSAGES=y

#
# Networking options
#
CONFIG_PACKET=m
CONFIG_UNIX=y
# CONFIG_UNIX_DIAG is not set
# CONFIG_NET_KEY is not set
# CONFIG_INET is not set
CONFIG_NETWORK_SECMARK=y
# CONFIG_NETFILTER is not set
CONFIG_ATM=m
# CONFIG_ATM_LANE is not set
CONFIG_STP=m
CONFIG_BRIDGE=m
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
CONFIG_LLC=m
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_PHONET is not set
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
# CONFIG_NET_SCH_CBQ is not set
CONFIG_NET_SCH_HTB=m
# CONFIG_NET_SCH_HFSC is not set
CONFIG_NET_SCH_ATM=m
# CONFIG_NET_SCH_PRIO is not set
CONFIG_NET_SCH_MULTIQ=m
CONFIG_NET_SCH_RED=m
# CONFIG_NET_SCH_SFB is not set
CONFIG_NET_SCH_SFQ=m
CONFIG_NET_SCH_TEQL=m
# CONFIG_NET_SCH_TBF is not set
# CONFIG_NET_SCH_GRED is not set
CONFIG_NET_SCH_DSMARK=m
# CONFIG_NET_SCH_NETEM is not set
CONFIG_NET_SCH_DRR=m
CONFIG_NET_SCH_MQPRIO=m
CONFIG_NET_SCH_CHOKE=m
CONFIG_NET_SCH_QFQ=m
# CONFIG_NET_SCH_CODEL is not set
CONFIG_NET_SCH_FQ_CODEL=m
# CONFIG_NET_SCH_PLUG is not set

#
# Classification
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=m
CONFIG_NET_CLS_TCINDEX=m
CONFIG_NET_CLS_FW=m
CONFIG_NET_CLS_U32=m
# CONFIG_CLS_U32_PERF is not set
CONFIG_CLS_U32_MARK=y
# CONFIG_NET_CLS_RSVP is not set
CONFIG_NET_CLS_RSVP6=m
# CONFIG_NET_CLS_FLOW is not set
# CONFIG_NET_CLS_CGROUP is not set
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
CONFIG_NET_EMATCH_CMP=m
CONFIG_NET_EMATCH_NBYTE=m
CONFIG_NET_EMATCH_U32=m
CONFIG_NET_EMATCH_META=m
# CONFIG_NET_EMATCH_TEXT is not set
# CONFIG_NET_CLS_ACT is not set
CONFIG_NET_CLS_IND=y
CONFIG_NET_SCH_FIFO=y
# CONFIG_DCB is not set
CONFIG_DNS_RESOLVER=m
# CONFIG_BATMAN_ADV is not set
CONFIG_OPENVSWITCH=m
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
# CONFIG_NETPRIO_CGROUP is not set
CONFIG_BQL=y
CONFIG_BPF_JIT=y

#
# Network testing
#
CONFIG_NET_PKTGEN=m
CONFIG_HAMRADIO=y

#
# Packet Radio protocols
#
# CONFIG_AX25 is not set
# CONFIG_CAN is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
CONFIG_WIRELESS=y
CONFIG_WIRELESS_EXT=y
CONFIG_WEXT_CORE=y
CONFIG_WEXT_PROC=y
CONFIG_WEXT_SPY=y
CONFIG_WEXT_PRIV=y
CONFIG_CFG80211=m
# CONFIG_NL80211_TESTMODE is not set
# CONFIG_CFG80211_DEVELOPER_WARNINGS is not set
CONFIG_CFG80211_REG_DEBUG=y
# CONFIG_CFG80211_DEFAULT_PS is not set
# CONFIG_CFG80211_DEBUGFS is not set
# CONFIG_CFG80211_INTERNAL_REGDB is not set
CONFIG_CFG80211_WEXT=y
CONFIG_LIB80211=m
CONFIG_LIB80211_DEBUG=y
CONFIG_MAC80211=m
CONFIG_MAC80211_HAS_RC=y
CONFIG_MAC80211_RC_PID=y
CONFIG_MAC80211_RC_MINSTREL=y
CONFIG_MAC80211_RC_MINSTREL_HT=y
# CONFIG_MAC80211_RC_DEFAULT_PID is not set
CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT="minstrel_ht"
CONFIG_MAC80211_LEDS=y
CONFIG_MAC80211_DEBUGFS=y
# CONFIG_MAC80211_MESSAGE_TRACING is not set
# CONFIG_MAC80211_DEBUG_MENU is not set
CONFIG_WIMAX=m
CONFIG_WIMAX_DEBUG_LEVEL=8
# CONFIG_RFKILL is not set
CONFIG_RFKILL_REGULATOR=m
CONFIG_NET_9P=m
# CONFIG_NET_9P_VIRTIO is not set
# CONFIG_NET_9P_DEBUG is not set
# CONFIG_CAIF is not set
CONFIG_HAVE_BPF_JIT=y

#
# Device Drivers
#

#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER_PATH=""
CONFIG_DEVTMPFS=y
# CONFIG_DEVTMPFS_MOUNT is not set
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=m
# CONFIG_FIRMWARE_IN_KERNEL is not set
CONFIG_EXTRA_FIRMWARE=""
CONFIG_DEBUG_DRIVER=y
# CONFIG_DEBUG_DEVRES is not set
# CONFIG_SYS_HYPERVISOR is not set
# CONFIG_GENERIC_CPU_DEVICES is not set
CONFIG_REGMAP=y
CONFIG_REGMAP_I2C=m
CONFIG_DMA_SHARED_BUFFER=y
# CONFIG_CONNECTOR is not set
# CONFIG_MTD is not set
CONFIG_PARPORT=m
CONFIG_PARPORT_PC=m
CONFIG_PARPORT_PC_PCMCIA=m
# CONFIG_PARPORT_GSC is not set
CONFIG_PARPORT_AX88796=m
CONFIG_PARPORT_1284=y
CONFIG_PARPORT_NOT_PC=y
CONFIG_BLK_DEV=y
# CONFIG_PARIDE is not set
# CONFIG_BLK_DEV_COW_COMMON is not set
# CONFIG_BLK_DEV_LOOP is not set

#
# DRBD disabled because PROC_FS, INET or CONNECTOR not selected
#
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_RAM is not set
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
# CONFIG_BLK_DEV_HD is not set

#
# Misc devices
#
# CONFIG_SENSORS_LIS3LV02D is not set
# CONFIG_AD525X_DPOT is not set
CONFIG_ENCLOSURE_SERVICES=m
# CONFIG_APDS9802ALS is not set
CONFIG_ISL29003=m
CONFIG_ISL29020=m
# CONFIG_SENSORS_TSL2550 is not set
CONFIG_SENSORS_BH1780=m
CONFIG_SENSORS_BH1770=m
# CONFIG_SENSORS_APDS990X is not set
CONFIG_HMC6352=m
# CONFIG_VMWARE_BALLOON is not set
CONFIG_BMP085=y
CONFIG_BMP085_I2C=m
# CONFIG_USB_SWITCH_FSA9480 is not set

#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
CONFIG_EEPROM_LEGACY=m
# CONFIG_EEPROM_93CX6 is not set

#
# Texas Instruments shared transport line discipline
#
# CONFIG_TI_ST is not set
# CONFIG_SENSORS_LIS3_I2C is not set

#
# Altera FPGA firmware download module
#
CONFIG_ALTERA_STAPL=m
CONFIG_HAVE_IDE=y
CONFIG_IDE=m

#
# Please see Documentation/ide/ide.txt for help/info on IDE drives
#
CONFIG_IDE_XFER_MODE=y
CONFIG_IDE_TIMINGS=y
CONFIG_IDE_ATAPI=y
# CONFIG_BLK_DEV_IDE_SATA is not set
# CONFIG_IDE_GD is not set
# CONFIG_BLK_DEV_IDECS is not set
# CONFIG_BLK_DEV_IDECD is not set
CONFIG_BLK_DEV_IDETAPE=m
CONFIG_IDE_TASK_IOCTL=y
CONFIG_IDE_PROC_FS=y

#
# IDE chipset support/bugfixes
#
# CONFIG_IDE_GENERIC is not set
# CONFIG_BLK_DEV_PLATFORM is not set
CONFIG_BLK_DEV_CMD640=m
CONFIG_BLK_DEV_CMD640_ENHANCED=y
# CONFIG_BLK_DEV_IDEDMA is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=m
# CONFIG_RAID_ATTRS is not set
CONFIG_SCSI=m
CONFIG_SCSI_DMA=y
# CONFIG_SCSI_NETLINK is not set
# CONFIG_SCSI_PROC_FS is not set

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
# CONFIG_CHR_DEV_OSST is not set
CONFIG_BLK_DEV_SR=m
CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_CHR_DEV_SG=m
CONFIG_CHR_DEV_SCH=m
# CONFIG_SCSI_ENCLOSURE is not set
CONFIG_SCSI_MULTI_LUN=y
# CONFIG_SCSI_CONSTANTS is not set
# CONFIG_SCSI_LOGGING is not set
# CONFIG_SCSI_SCAN_ASYNC is not set

#
# SCSI Transports
#
# CONFIG_SCSI_SPI_ATTRS is not set
# CONFIG_SCSI_FC_ATTRS is not set
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_SCSI_SAS_ATA=y
# CONFIG_SCSI_SAS_HOST_SMP is not set
# CONFIG_SCSI_SRP_ATTRS is not set
# CONFIG_SCSI_LOWLEVEL is not set
CONFIG_SCSI_LOWLEVEL_PCMCIA=y
# CONFIG_PCMCIA_AHA152X is not set
# CONFIG_PCMCIA_FDOMAIN is not set
CONFIG_PCMCIA_QLOGIC=m
# CONFIG_PCMCIA_SYM53C500 is not set
# CONFIG_SCSI_DH is not set
# CONFIG_SCSI_OSD_INITIATOR is not set
CONFIG_ATA=m
# CONFIG_ATA_NONSTANDARD is not set
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
# CONFIG_SATA_AHCI_PLATFORM is not set
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_SATA_MV=m

#
# PATA SFF controllers with BMDMA
#
CONFIG_PATA_ARASAN_CF=m

#
# PIO-only SFF controllers
#
CONFIG_PATA_PCMCIA=m
CONFIG_PATA_PLATFORM=m

#
# Generic fallback / legacy drivers
#
CONFIG_MD=y
# CONFIG_BLK_DEV_MD is not set
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
# CONFIG_DM_CRYPT is not set
# CONFIG_DM_SNAPSHOT is not set
CONFIG_DM_MIRROR=m
# CONFIG_DM_RAID is not set
# CONFIG_DM_ZERO is not set
# CONFIG_DM_MULTIPATH is not set
# CONFIG_DM_UEVENT is not set
CONFIG_TARGET_CORE=m
# CONFIG_TCM_IBLOCK is not set
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_LOOPBACK_TARGET=m
# CONFIG_ISCSI_TARGET is not set
CONFIG_MACINTOSH_DRIVERS=y
# CONFIG_MAC_EMUMOUSEBTN is not set
CONFIG_NETDEVICES=y
CONFIG_NET_CORE=y
CONFIG_DUMMY=m
CONFIG_EQUALIZER=m
CONFIG_MII=m
CONFIG_NETCONSOLE=m
CONFIG_NETCONSOLE_DYNAMIC=y
CONFIG_NETPOLL=y
CONFIG_NETPOLL_TRAP=y
CONFIG_NET_POLL_CONTROLLER=y
# CONFIG_TUN is not set
CONFIG_VETH=m
CONFIG_ARCNET=m
CONFIG_ARCNET_1201=m
CONFIG_ARCNET_1051=m
# CONFIG_ARCNET_RAW is not set
# CONFIG_ARCNET_CAP is not set
CONFIG_ARCNET_COM90xx=m
# CONFIG_ARCNET_COM90xxIO is not set
# CONFIG_ARCNET_RIM_I is not set
CONFIG_ARCNET_COM20020=m
CONFIG_ARCNET_COM20020_CS=m
CONFIG_ATM_DRIVERS=y
CONFIG_ATM_DUMMY=m

#
# CAIF transport drivers
#
CONFIG_ETHERNET=y
# CONFIG_NET_VENDOR_3COM is not set
CONFIG_NET_VENDOR_AMD=y
CONFIG_PCMCIA_NMCLAN=m
# CONFIG_NET_VENDOR_BROADCOM is not set
# CONFIG_NET_CALXEDA_XGMAC is not set
# CONFIG_DNET is not set
# CONFIG_NET_VENDOR_DLINK is not set
CONFIG_NET_VENDOR_FUJITSU=y
# CONFIG_PCMCIA_FMVJ18X is not set
CONFIG_NET_VENDOR_MICREL=y
# CONFIG_KS8842 is not set
# CONFIG_KS8851_MLL is not set
CONFIG_NET_VENDOR_NATSEMI=y
CONFIG_NET_VENDOR_8390=y
CONFIG_PCMCIA_AXNET=m
CONFIG_PCMCIA_PCNET=m
CONFIG_ETHOC=m
# CONFIG_NET_VENDOR_REALTEK is not set
CONFIG_NET_VENDOR_SMSC=y
CONFIG_PCMCIA_SMC91C92=m
CONFIG_NET_VENDOR_STMICRO=y
CONFIG_STMMAC_ETH=m
# CONFIG_STMMAC_PLATFORM is not set
CONFIG_STMMAC_DEBUG_FS=y
# CONFIG_STMMAC_DA is not set
# CONFIG_STMMAC_RING is not set
CONFIG_STMMAC_CHAINED=y
CONFIG_NET_VENDOR_WIZNET=y
CONFIG_WIZNET_W5100=m
CONFIG_WIZNET_W5300=m
# CONFIG_WIZNET_BUS_DIRECT is not set
# CONFIG_WIZNET_BUS_INDIRECT is not set
CONFIG_WIZNET_BUS_ANY=y
# CONFIG_NET_VENDOR_XIRCOM is not set
CONFIG_PHYLIB=m

#
# MII PHY device drivers
#
CONFIG_AMD_PHY=m
# CONFIG_MARVELL_PHY is not set
# CONFIG_DAVICOM_PHY is not set
CONFIG_QSEMI_PHY=m
CONFIG_LXT_PHY=m
CONFIG_CICADA_PHY=m
CONFIG_VITESSE_PHY=m
# CONFIG_SMSC_PHY is not set
# CONFIG_BROADCOM_PHY is not set
# CONFIG_BCM87XX_PHY is not set
# CONFIG_ICPLUS_PHY is not set
# CONFIG_REALTEK_PHY is not set
# CONFIG_NATIONAL_PHY is not set
CONFIG_STE10XP=m
# CONFIG_LSI_ET1011C_PHY is not set
CONFIG_MICREL_PHY=m
CONFIG_MDIO_BITBANG=m
CONFIG_MDIO_GPIO=m
# CONFIG_PLIP is not set
CONFIG_PPP=m
CONFIG_PPP_BSDCOMP=m
CONFIG_PPP_DEFLATE=m
# CONFIG_PPP_FILTER is not set
# CONFIG_PPPOATM is not set
# CONFIG_PPP_ASYNC is not set
CONFIG_PPP_SYNC_TTY=m
CONFIG_SLIP=m
CONFIG_SLHC=m
# CONFIG_SLIP_COMPRESSED is not set
# CONFIG_SLIP_SMART is not set
# CONFIG_SLIP_MODE_SLIP6 is not set
CONFIG_WLAN=y
# CONFIG_PCMCIA_RAYCS is not set
CONFIG_LIBERTAS_THINFIRM=m
CONFIG_LIBERTAS_THINFIRM_DEBUG=y
# CONFIG_ATMEL is not set
CONFIG_AIRO_CS=m
# CONFIG_MAC80211_HWSIM is not set
CONFIG_ATH_COMMON=m
# CONFIG_ATH_DEBUG is not set
CONFIG_ATH9K_HW=m
CONFIG_ATH9K_COMMON=m
# CONFIG_ATH9K_BTCOEX_SUPPORT is not set
CONFIG_ATH9K=m
CONFIG_ATH9K_AHB=y
CONFIG_ATH9K_DEBUGFS=y
# CONFIG_ATH9K_DFS_CERTIFIED is not set
CONFIG_ATH9K_MAC_DEBUG=y
# CONFIG_ATH9K_RATE_CONTROL is not set
# CONFIG_ATH6KL is not set
CONFIG_B43=m
CONFIG_B43_BCMA=y
# CONFIG_B43_BCMA_EXTRA is not set
CONFIG_B43_SSB=y
# CONFIG_B43_PCMCIA is not set
CONFIG_B43_BCMA_PIO=y
CONFIG_B43_PIO=y
# CONFIG_B43_PHY_LP is not set
CONFIG_B43_LEDS=y
CONFIG_B43_HWRNG=y
# CONFIG_B43_DEBUG is not set
CONFIG_B43LEGACY=m
CONFIG_B43LEGACY_LEDS=y
CONFIG_B43LEGACY_HWRNG=y
# CONFIG_B43LEGACY_DEBUG is not set
CONFIG_B43LEGACY_DMA=y
CONFIG_B43LEGACY_PIO=y
CONFIG_B43LEGACY_DMA_AND_PIO_MODE=y
# CONFIG_B43LEGACY_DMA_MODE is not set
# CONFIG_B43LEGACY_PIO_MODE is not set
CONFIG_BRCMUTIL=m
CONFIG_BRCMSMAC=m
# CONFIG_BRCMFMAC is not set
# CONFIG_BRCMDBG is not set
# CONFIG_HOSTAP is not set
# CONFIG_LIBERTAS is not set
# CONFIG_HERMES is not set
CONFIG_RT2X00=m
CONFIG_WL_TI=y
# CONFIG_WL12XX is not set
# CONFIG_WL18XX is not set
# CONFIG_WLCORE is not set
# CONFIG_MWIFIEX is not set

#
# WiMAX Wireless Broadband devices
#

#
# Enable USB support to see WiMAX USB drivers
#
CONFIG_WAN=y
CONFIG_HDLC=m
# CONFIG_HDLC_RAW is not set
CONFIG_HDLC_RAW_ETH=m
CONFIG_HDLC_CISCO=m
# CONFIG_HDLC_FR is not set
# CONFIG_HDLC_PPP is not set

#
# X.25/LAPB support is disabled
#
# CONFIG_DLCI is not set
# CONFIG_SBNI is not set
# CONFIG_ISDN is not set

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_FF_MEMLESS=m
CONFIG_INPUT_POLLDEV=m
CONFIG_INPUT_SPARSEKMAP=m
CONFIG_INPUT_MATRIXKMAP=m

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=m
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
CONFIG_INPUT_EVDEV=m
CONFIG_INPUT_EVBUG=m

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
CONFIG_KEYBOARD_ADP5588=m
CONFIG_KEYBOARD_ADP5589=m
CONFIG_KEYBOARD_ATKBD=y
CONFIG_KEYBOARD_QT1070=m
# CONFIG_KEYBOARD_LKKBD is not set
CONFIG_KEYBOARD_GPIO=m
CONFIG_KEYBOARD_GPIO_POLLED=m
CONFIG_KEYBOARD_TCA6416=m
CONFIG_KEYBOARD_TCA8418=m
CONFIG_KEYBOARD_MATRIX=m
CONFIG_KEYBOARD_LM8323=m
# CONFIG_KEYBOARD_LM8333 is not set
CONFIG_KEYBOARD_MAX7359=m
CONFIG_KEYBOARD_MCS=m
# CONFIG_KEYBOARD_MPR121 is not set
CONFIG_KEYBOARD_NEWTON=m
CONFIG_KEYBOARD_OPENCORES=m
CONFIG_KEYBOARD_STOWAWAY=m
# CONFIG_KEYBOARD_SUNKBD is not set
CONFIG_KEYBOARD_OMAP4=m
CONFIG_KEYBOARD_XTKBD=m
CONFIG_INPUT_MOUSE=y
# CONFIG_MOUSE_PS2 is not set
# CONFIG_MOUSE_SERIAL is not set
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_GPIO is not set
CONFIG_MOUSE_SYNAPTICS_I2C=m
# CONFIG_MOUSE_SYNAPTICS_USB is not set
CONFIG_INPUT_JOYSTICK=y
CONFIG_JOYSTICK_ANALOG=m
# CONFIG_JOYSTICK_A3D is not set
# CONFIG_JOYSTICK_ADI is not set
# CONFIG_JOYSTICK_COBRA is not set
CONFIG_JOYSTICK_GF2K=m
CONFIG_JOYSTICK_GRIP=m
# CONFIG_JOYSTICK_GRIP_MP is not set
# CONFIG_JOYSTICK_GUILLEMOT is not set
CONFIG_JOYSTICK_INTERACT=m
# CONFIG_JOYSTICK_SIDEWINDER is not set
CONFIG_JOYSTICK_TMDC=m
CONFIG_JOYSTICK_IFORCE=m
CONFIG_JOYSTICK_IFORCE_232=y
CONFIG_JOYSTICK_WARRIOR=m
CONFIG_JOYSTICK_MAGELLAN=m
CONFIG_JOYSTICK_SPACEORB=m
# CONFIG_JOYSTICK_SPACEBALL is not set
# CONFIG_JOYSTICK_STINGER is not set
CONFIG_JOYSTICK_TWIDJOY=m
CONFIG_JOYSTICK_ZHENHUA=m
CONFIG_JOYSTICK_DB9=m
CONFIG_JOYSTICK_GAMECON=m
# CONFIG_JOYSTICK_TURBOGRAFX is not set
CONFIG_JOYSTICK_AS5011=m
CONFIG_JOYSTICK_JOYDUMP=m
# CONFIG_JOYSTICK_XPAD is not set
# CONFIG_JOYSTICK_WALKERA0701 is not set
# CONFIG_INPUT_TABLET is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
# CONFIG_INPUT_MISC is not set

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=m
CONFIG_SERIO_CT82C710=m
# CONFIG_SERIO_PARKBD is not set
CONFIG_SERIO_LIBPS2=y
CONFIG_SERIO_RAW=m
CONFIG_SERIO_ALTERA_PS2=m
# CONFIG_SERIO_PS2MULT is not set
CONFIG_GAMEPORT=m
# CONFIG_GAMEPORT_NS558 is not set
# CONFIG_GAMEPORT_L4 is not set

#
# Character devices
#
# CONFIG_VT is not set
# CONFIG_UNIX98_PTYS is not set
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
# CONFIG_SERIAL_NONSTANDARD is not set
# CONFIG_TRACE_ROUTER is not set
CONFIG_TRACE_SINK=m
CONFIG_DEVKMEM=y

#
# Serial drivers
#
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_FIX_EARLYCON_MEM=y
# CONFIG_SERIAL_8250_CS is not set
CONFIG_SERIAL_8250_NR_UARTS=4
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
# CONFIG_SERIAL_8250_EXTENDED is not set

#
# Non-8250 serial port support
#
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
# CONFIG_SERIAL_TIMBERDALE is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
CONFIG_SERIAL_ALTERA_UART=m
CONFIG_SERIAL_ALTERA_UART_MAXPORTS=4
CONFIG_SERIAL_ALTERA_UART_BAUDRATE=115200
# CONFIG_SERIAL_XILINX_PS_UART is not set
CONFIG_TTY_PRINTK=y
# CONFIG_PRINTER is not set
CONFIG_PPDEV=m
CONFIG_HVC_DRIVER=y
CONFIG_VIRTIO_CONSOLE=m
# CONFIG_IPMI_HANDLER is not set
CONFIG_HW_RANDOM=m
CONFIG_HW_RANDOM_TIMERIOMEM=m
# CONFIG_HW_RANDOM_VIA is not set
# CONFIG_HW_RANDOM_VIRTIO is not set
# CONFIG_NVRAM is not set
# CONFIG_RTC is not set
# CONFIG_GEN_RTC is not set
CONFIG_R3964=m

#
# PCMCIA character devices
#
CONFIG_SYNCLINK_CS=m
CONFIG_CARDMAN_4000=m
CONFIG_CARDMAN_4040=m
CONFIG_IPWIRELESS=m
# CONFIG_MWAVE is not set
CONFIG_RAW_DRIVER=m
CONFIG_MAX_RAW_DEVS=256
CONFIG_HANGCHECK_TIMER=m
CONFIG_TCG_TPM=y
CONFIG_TCG_TIS=y
# CONFIG_TCG_NSC is not set
# CONFIG_TCG_ATMEL is not set
CONFIG_I2C=m
CONFIG_I2C_BOARDINFO=y
# CONFIG_I2C_COMPAT is not set
CONFIG_I2C_CHARDEV=m
CONFIG_I2C_MUX=m

#
# Multiplexer I2C Chip support
#
CONFIG_I2C_MUX_GPIO=m
CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_ALGOBIT=m

#
# I2C Hardware Bus support
#

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
CONFIG_I2C_GPIO=m
# CONFIG_I2C_PCA_PLATFORM is not set
# CONFIG_I2C_PXA_PCI is not set
CONFIG_I2C_SIMTEC=m

#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_PARPORT is not set
# CONFIG_I2C_PARPORT_LIGHT is not set

#
# Other I2C/SMBus bus drivers
#
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
CONFIG_I2C_DEBUG_BUS=y
# CONFIG_SPI is not set
# CONFIG_HSI is not set

#
# PPS support
#

#
# PPS generators support
#

#
# PTP clock support
#

#
# Enable Device Drivers -> PPS to see the PTP clock options.
#
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
CONFIG_GPIOLIB=y
# CONFIG_DEBUG_GPIO is not set
CONFIG_GPIO_MAX730X=m

#
# Memory mapped GPIO drivers:
#
# CONFIG_GPIO_GENERIC_PLATFORM is not set
CONFIG_GPIO_IT8761E=m

#
# I2C GPIO expanders:
#
CONFIG_GPIO_MAX7300=m
# CONFIG_GPIO_MAX732X is not set
# CONFIG_GPIO_PCA953X is not set
# CONFIG_GPIO_PCF857X is not set
# CONFIG_GPIO_ADP5588 is not set

#
# PCI GPIO expanders:
#

#
# SPI GPIO expanders:
#
CONFIG_GPIO_MCP23S08=m

#
# AC97 GPIO expanders:
#

#
# MODULbus GPIO expanders:
#
CONFIG_W1=m

#
# 1-wire Bus Masters
#
# CONFIG_W1_MASTER_DS1WM is not set
# CONFIG_W1_MASTER_GPIO is not set

#
# 1-wire Slaves
#
CONFIG_W1_SLAVE_THERM=m
# CONFIG_W1_SLAVE_SMEM is not set
CONFIG_W1_SLAVE_DS2408=m
# CONFIG_W1_SLAVE_DS2423 is not set
CONFIG_W1_SLAVE_DS2431=m
CONFIG_W1_SLAVE_DS2433=m
# CONFIG_W1_SLAVE_DS2433_CRC is not set
# CONFIG_W1_SLAVE_DS2760 is not set
CONFIG_W1_SLAVE_DS2780=m
CONFIG_W1_SLAVE_DS2781=m
# CONFIG_W1_SLAVE_DS28E04 is not set
CONFIG_W1_SLAVE_BQ27000=m
CONFIG_POWER_SUPPLY=y
CONFIG_POWER_SUPPLY_DEBUG=y
# CONFIG_PDA_POWER is not set
CONFIG_TEST_POWER=m
CONFIG_BATTERY_DS2780=m
CONFIG_BATTERY_DS2781=m
CONFIG_BATTERY_DS2782=m
CONFIG_BATTERY_SBS=m
# CONFIG_BATTERY_BQ27x00 is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
CONFIG_CHARGER_PCF50633=m
# CONFIG_CHARGER_MAX8903 is not set
CONFIG_CHARGER_LP8727=m
CONFIG_CHARGER_GPIO=m
# CONFIG_CHARGER_SMB347 is not set
# CONFIG_POWER_AVS is not set
CONFIG_HWMON=m
CONFIG_HWMON_VID=m
CONFIG_HWMON_DEBUG_CHIP=y

#
# Native drivers
#
CONFIG_SENSORS_ADM1021=m
# CONFIG_SENSORS_ADM1025 is not set
CONFIG_SENSORS_ADM1026=m
# CONFIG_SENSORS_ADM1029 is not set
CONFIG_SENSORS_ADM1031=m
CONFIG_SENSORS_ADM9240=m
CONFIG_SENSORS_ADT7475=m
CONFIG_SENSORS_ASC7621=m
CONFIG_SENSORS_DS620=m
CONFIG_SENSORS_DS1621=m
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_F71882FG is not set
# CONFIG_SENSORS_F75375S is not set
CONFIG_SENSORS_FSCHMD=m
CONFIG_SENSORS_G760A=m
CONFIG_SENSORS_GL518SM=m
CONFIG_SENSORS_GL520SM=m
# CONFIG_SENSORS_GPIO_FAN is not set
# CONFIG_SENSORS_IT87 is not set
CONFIG_SENSORS_JC42=m
# CONFIG_SENSORS_LM63 is not set
CONFIG_SENSORS_LM73=m
CONFIG_SENSORS_LM75=m
CONFIG_SENSORS_LM77=m
# CONFIG_SENSORS_LM78 is not set
CONFIG_SENSORS_LM80=m
CONFIG_SENSORS_LM83=m
# CONFIG_SENSORS_LM85 is not set
CONFIG_SENSORS_LM87=m
# CONFIG_SENSORS_LM90 is not set
CONFIG_SENSORS_LM92=m
# CONFIG_SENSORS_LM93 is not set
CONFIG_SENSORS_LTC4151=m
CONFIG_SENSORS_LM95241=m
CONFIG_SENSORS_MAX16065=m
CONFIG_SENSORS_MAX1619=m
# CONFIG_SENSORS_PC87360 is not set
CONFIG_SENSORS_PC87427=m
# CONFIG_SENSORS_PCF8591 is not set
CONFIG_SENSORS_SHT15=m
# CONFIG_SENSORS_SHT21 is not set
# CONFIG_SENSORS_EMC1403 is not set
# CONFIG_SENSORS_EMC2103 is not set
CONFIG_SENSORS_EMC6W201=m
CONFIG_SENSORS_SMSC47M1=m
CONFIG_SENSORS_SMSC47M192=m
CONFIG_SENSORS_SCH56XX_COMMON=m
# CONFIG_SENSORS_SCH5627 is not set
CONFIG_SENSORS_SCH5636=m
CONFIG_SENSORS_ADS1015=m
CONFIG_SENSORS_ADS7828=m
# CONFIG_SENSORS_THMC50 is not set
CONFIG_SENSORS_VIA_CPUTEMP=m
CONFIG_SENSORS_VT1211=m
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83791D is not set
CONFIG_SENSORS_W83792D=m
# CONFIG_SENSORS_W83627HF is not set
# CONFIG_SENSORS_W83627EHF is not set
CONFIG_SENSORS_APPLESMC=m
# CONFIG_SENSORS_MC13783_ADC is not set
CONFIG_THERMAL=m
CONFIG_THERMAL_HWMON=y
CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
CONFIG_WATCHDOG_NOWAYOUT=y

#
# Watchdog Device Drivers
#
# CONFIG_SOFT_WATCHDOG is not set
# CONFIG_ACQUIRE_WDT is not set
CONFIG_ADVANTECH_WDT=m
CONFIG_SC520_WDT=m
CONFIG_SBC_FITPC2_WATCHDOG=m
# CONFIG_EUROTECH_WDT is not set
# CONFIG_IB700_WDT is not set
CONFIG_IBMASR=m
CONFIG_WAFER_WDT=m
# CONFIG_IT8712F_WDT is not set
CONFIG_SC1200_WDT=m
CONFIG_PC87413_WDT=m
# CONFIG_60XX_WDT is not set
CONFIG_SBC8360_WDT=m
# CONFIG_CPU5_WDT is not set
# CONFIG_SMSC_SCH311X_WDT is not set
# CONFIG_SMSC37B787_WDT is not set
# CONFIG_W83627HF_WDT is not set
# CONFIG_W83697HF_WDT is not set
CONFIG_W83697UG_WDT=m
CONFIG_W83877F_WDT=m
# CONFIG_W83977F_WDT is not set
# CONFIG_MACHZ_WDT is not set
# CONFIG_SBC_EPX_C3_WATCHDOG is not set
CONFIG_SSB_POSSIBLE=y

#
# Sonics Silicon Backplane
#
CONFIG_SSB=m
CONFIG_SSB_BLOCKIO=y
CONFIG_SSB_PCMCIAHOST_POSSIBLE=y
# CONFIG_SSB_PCMCIAHOST is not set
CONFIG_SSB_SDIOHOST_POSSIBLE=y
CONFIG_SSB_SDIOHOST=y
CONFIG_SSB_SILENT=y
CONFIG_BCMA_POSSIBLE=y

#
# Broadcom specific AMBA
#
CONFIG_BCMA=m
CONFIG_BCMA_BLOCKIO=y
# CONFIG_BCMA_DEBUG is not set

#
# Multifunction device drivers
#
CONFIG_MFD_CORE=m
CONFIG_MFD_SM501=m
# CONFIG_MFD_SM501_GPIO is not set
CONFIG_HTC_PASIC3=m
# CONFIG_MFD_LM3533 is not set
CONFIG_TPS6105X=m
# CONFIG_TPS65010 is not set
CONFIG_TPS6507X=m
# CONFIG_MFD_TPS65217 is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_MFD_ARIZONA_I2C is not set
CONFIG_MFD_PCF50633=m
CONFIG_PCF50633_ADC=m
# CONFIG_PCF50633_GPIO is not set
CONFIG_MFD_MC13783=m
CONFIG_MFD_MC13XXX=m
CONFIG_MFD_MC13XXX_I2C=m
CONFIG_ABX500_CORE=y
# CONFIG_MFD_WL1273_CORE is not set
CONFIG_REGULATOR=y
CONFIG_REGULATOR_DEBUG=y
# CONFIG_REGULATOR_DUMMY is not set
CONFIG_REGULATOR_FIXED_VOLTAGE=m
CONFIG_REGULATOR_VIRTUAL_CONSUMER=m
CONFIG_REGULATOR_USERSPACE_CONSUMER=m
# CONFIG_REGULATOR_GPIO is not set
CONFIG_REGULATOR_AD5398=m
# CONFIG_REGULATOR_MC13783 is not set
# CONFIG_REGULATOR_MC13892 is not set
CONFIG_REGULATOR_ISL6271A=m
# CONFIG_REGULATOR_MAX1586 is not set
CONFIG_REGULATOR_MAX8649=m
CONFIG_REGULATOR_MAX8660=m
CONFIG_REGULATOR_MAX8952=m
CONFIG_REGULATOR_LP3971=m
# CONFIG_REGULATOR_LP3972 is not set
# CONFIG_REGULATOR_PCF50633 is not set
CONFIG_REGULATOR_TPS6105X=m
# CONFIG_REGULATOR_TPS62360 is not set
# CONFIG_REGULATOR_TPS65023 is not set
CONFIG_REGULATOR_TPS6507X=m
# CONFIG_MEDIA_SUPPORT is not set

#
# Graphics support
#
CONFIG_DRM=m
# CONFIG_VGASTATE is not set
CONFIG_VIDEO_OUTPUT_CONTROL=m
# CONFIG_FB is not set
# CONFIG_EXYNOS_VIDEO is not set
# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=m
CONFIG_SOUND=m
# CONFIG_SOUND_OSS_CORE is not set
# CONFIG_SND is not set
# CONFIG_SOUND_PRIME is not set

#
# HID support
#
CONFIG_HID=m
# CONFIG_HIDRAW is not set
# CONFIG_UHID is not set
CONFIG_HID_GENERIC=m

#
# Special HID drivers
#
# CONFIG_USB_ARCH_HAS_OHCI is not set
# CONFIG_USB_ARCH_HAS_EHCI is not set
# CONFIG_USB_ARCH_HAS_XHCI is not set
CONFIG_USB_SUPPORT=y
CONFIG_USB_ARCH_HAS_HCD=y
# CONFIG_USB is not set
# CONFIG_USB_OTG_WHITELIST is not set
# CONFIG_USB_OTG_BLACKLIST_HUB is not set

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
#
# CONFIG_USB_GADGET is not set

#
# OTG and related infrastructure
#
CONFIG_MMC=m
CONFIG_MMC_DEBUG=y
CONFIG_MMC_UNSAFE_RESUME=y

#
# MMC/SD/SDIO Card Drivers
#
CONFIG_MMC_BLOCK=m
CONFIG_MMC_BLOCK_MINORS=8
# CONFIG_MMC_BLOCK_BOUNCE is not set
CONFIG_SDIO_UART=m
# CONFIG_MMC_TEST is not set

#
# MMC/SD/SDIO Host Controller Drivers
#
# CONFIG_MMC_SDHCI is not set
CONFIG_MEMSTICK=m
# CONFIG_MEMSTICK_DEBUG is not set

#
# MemoryStick drivers
#
CONFIG_MEMSTICK_UNSAFE_RESUME=y
# CONFIG_MSPRO_BLOCK is not set

#
# MemoryStick Host Controller Drivers
#
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=m

#
# LED drivers
#
# CONFIG_LEDS_LM3530 is not set
CONFIG_LEDS_GPIO=m
CONFIG_LEDS_LP3944=m
CONFIG_LEDS_LP5521=m
# CONFIG_LEDS_LP5523 is not set
CONFIG_LEDS_PCA955X=m
# CONFIG_LEDS_PCA9633 is not set
# CONFIG_LEDS_REGULATOR is not set
# CONFIG_LEDS_BD2802 is not set
# CONFIG_LEDS_LT3593 is not set
# CONFIG_LEDS_MC13783 is not set
CONFIG_LEDS_TCA6507=m
# CONFIG_LEDS_LM3556 is not set
CONFIG_LEDS_OT200=m
CONFIG_LEDS_TRIGGERS=y

#
# LED Triggers
#
CONFIG_LEDS_TRIGGER_TIMER=m
# CONFIG_LEDS_TRIGGER_ONESHOT is not set
CONFIG_LEDS_TRIGGER_HEARTBEAT=m
CONFIG_LEDS_TRIGGER_BACKLIGHT=m
# CONFIG_LEDS_TRIGGER_GPIO is not set
# CONFIG_LEDS_TRIGGER_DEFAULT_ON is not set

#
# iptables trigger is under Netfilter config (LED target)
#
CONFIG_LEDS_TRIGGER_TRANSIENT=m
# CONFIG_ACCESSIBILITY is not set
# CONFIG_EDAC is not set
# CONFIG_RTC_CLASS is not set
CONFIG_DMADEVICES=y
CONFIG_DMADEVICES_DEBUG=y
CONFIG_DMADEVICES_VDEBUG=y

#
# DMA Devices
#
# CONFIG_TIMB_DMA is not set
CONFIG_DMA_ENGINE=y

#
# DMA Clients
#
# CONFIG_NET_DMA is not set
# CONFIG_ASYNC_TX_DMA is not set
CONFIG_DMATEST=m
CONFIG_AUXDISPLAY=y
CONFIG_KS0108=m
CONFIG_KS0108_PORT=0x378
CONFIG_KS0108_DELAY=2
CONFIG_UIO=m
CONFIG_UIO_PDRV=m
CONFIG_UIO_PDRV_GENIRQ=m
CONFIG_VIRTIO=m
CONFIG_VIRTIO_RING=m

#
# Virtio drivers
#
CONFIG_VIRTIO_BALLOON=m

#
# Microsoft Hyper-V guest support
#
CONFIG_STAGING=y
CONFIG_ECHO=m
CONFIG_COMEDI=m
# CONFIG_COMEDI_DEBUG is not set
CONFIG_COMEDI_DEFAULT_BUF_SIZE_KB=2048
CONFIG_COMEDI_DEFAULT_BUF_MAXSIZE_KB=20480
# CONFIG_COMEDI_MISC_DRIVERS is not set
# CONFIG_COMEDI_PCMCIA_DRIVERS is not set
CONFIG_COMEDI_8255=m
# CONFIG_PANEL is not set
CONFIG_RTLLIB=m
CONFIG_RTLLIB_CRYPTO_CCMP=m
# CONFIG_RTLLIB_CRYPTO_TKIP is not set
CONFIG_RTLLIB_CRYPTO_WEP=m
CONFIG_ZRAM=m
CONFIG_ZRAM_DEBUG=y
CONFIG_ZSMALLOC=m
# CONFIG_WLAGS49_H2 is not set
CONFIG_WLAGS49_H25=m
# CONFIG_FT1000 is not set

#
# Speakup console speech
#
CONFIG_TOUCHSCREEN_CLEARPAD_TM1217=m
# CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4 is not set
CONFIG_STAGING_MEDIA=y

#
# Android
#
# CONFIG_ANDROID is not set
# CONFIG_PHONE is not set
# CONFIG_IPACK_BUS is not set
# CONFIG_WIMAX_GDM72XX is not set
CONFIG_X86_PLATFORM_DEVICES=y
CONFIG_SENSORS_HDAPS=m
# CONFIG_SAMSUNG_LAPTOP is not set
CONFIG_SAMSUNG_Q10=m

#
# Hardware Spinlock drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_CLKBLD_I8253=y
CONFIG_IOMMU_SUPPORT=y

#
# Remoteproc drivers (EXPERIMENTAL)
#

#
# Rpmsg drivers (EXPERIMENTAL)
#
CONFIG_VIRT_DRIVERS=y
# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
CONFIG_MEMORY=y
# CONFIG_IIO is not set
# CONFIG_PWM is not set

#
# Firmware Drivers
#
CONFIG_EDD=m
CONFIG_EDD_OFF=y
# CONFIG_FIRMWARE_MEMMAP is not set
# CONFIG_DELL_RBU is not set
CONFIG_DCDBAS=m
CONFIG_ISCSI_IBFT_FIND=y
# CONFIG_GOOGLE_FIRMWARE is not set

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
CONFIG_EXT2_FS=m
# CONFIG_EXT2_FS_XATTR is not set
CONFIG_EXT2_FS_XIP=y
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=m
CONFIG_EXT4_USE_FOR_EXT23=y
# CONFIG_EXT4_FS_XATTR is not set
CONFIG_EXT4_DEBUG=y
CONFIG_FS_XIP=y
CONFIG_JBD2=m
CONFIG_JBD2_DEBUG=y
CONFIG_REISERFS_FS=m
# CONFIG_REISERFS_CHECK is not set
# CONFIG_REISERFS_PROC_INFO is not set
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_JFS_FS=m
CONFIG_JFS_POSIX_ACL=y
# CONFIG_JFS_SECURITY is not set
CONFIG_JFS_DEBUG=y
CONFIG_JFS_STATISTICS=y
# CONFIG_XFS_FS is not set
CONFIG_GFS2_FS=m
# CONFIG_OCFS2_FS is not set
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
CONFIG_FILE_LOCKING=y
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_FANOTIFY=y
CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
# CONFIG_QUOTA is not set
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y
CONFIG_AUTOFS4_FS=m
# CONFIG_FUSE_FS is not set

#
# Caches
#
CONFIG_FSCACHE=m
# CONFIG_FSCACHE_STATS is not set
# CONFIG_FSCACHE_HISTOGRAM is not set
# CONFIG_FSCACHE_DEBUG is not set
CONFIG_FSCACHE_OBJECT_LIST=y
# CONFIG_CACHEFILES is not set

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
# CONFIG_ZISOFS is not set
# CONFIG_UDF_FS is not set

#
# DOS/FAT/NT Filesystems
#
# CONFIG_MSDOS_FS is not set
# CONFIG_VFAT_FS is not set
# CONFIG_NTFS_FS is not set

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_SYSFS=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_CONFIGFS_FS=m
# CONFIG_MISC_FILESYSTEMS is not set
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS=m
CONFIG_NLS_DEFAULT="iso8859-1"
# CONFIG_NLS_CODEPAGE_437 is not set
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
# CONFIG_NLS_CODEPAGE_850 is not set
CONFIG_NLS_CODEPAGE_852=m
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
CONFIG_NLS_CODEPAGE_860=m
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
CONFIG_NLS_CODEPAGE_863=m
# CONFIG_NLS_CODEPAGE_864 is not set
CONFIG_NLS_CODEPAGE_865=m
CONFIG_NLS_CODEPAGE_866=m
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
CONFIG_NLS_CODEPAGE_950=m
# CONFIG_NLS_CODEPAGE_932 is not set
CONFIG_NLS_CODEPAGE_949=m
# CONFIG_NLS_CODEPAGE_874 is not set
CONFIG_NLS_ISO8859_8=m
CONFIG_NLS_CODEPAGE_1250=m
# CONFIG_NLS_CODEPAGE_1251 is not set
# CONFIG_NLS_ASCII is not set
CONFIG_NLS_ISO8859_1=m
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
CONFIG_NLS_ISO8859_4=m
CONFIG_NLS_ISO8859_5=m
CONFIG_NLS_ISO8859_6=m
# CONFIG_NLS_ISO8859_7 is not set
CONFIG_NLS_ISO8859_9=m
CONFIG_NLS_ISO8859_13=m
CONFIG_NLS_ISO8859_14=m
CONFIG_NLS_ISO8859_15=m
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
CONFIG_NLS_MAC_ROMAN=m
# CONFIG_NLS_MAC_CELTIC is not set
CONFIG_NLS_MAC_CENTEURO=m
CONFIG_NLS_MAC_CROATIAN=m
# CONFIG_NLS_MAC_CYRILLIC is not set
# CONFIG_NLS_MAC_GAELIC is not set
CONFIG_NLS_MAC_GREEK=m
# CONFIG_NLS_MAC_ICELAND is not set
CONFIG_NLS_MAC_INUIT=m
CONFIG_NLS_MAC_ROMANIAN=m
# CONFIG_NLS_MAC_TURKISH is not set
# CONFIG_NLS_UTF8 is not set

#
# Kernel hacking
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_PRINTK_TIME=y
CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
CONFIG_ENABLE_WARN_DEPRECATED=y
CONFIG_ENABLE_MUST_CHECK=y
CONFIG_FRAME_WARN=2048
# CONFIG_MAGIC_SYSRQ is not set
# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_READABLE_ASM is not set
# CONFIG_UNUSED_SYMBOLS is not set
CONFIG_DEBUG_FS=y
# CONFIG_HEADERS_CHECK is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_SHIRQ=y
CONFIG_LOCKUP_DETECTOR=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=1
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=1
# CONFIG_PANIC_ON_OOPS is not set
CONFIG_PANIC_ON_OOPS_VALUE=0
# CONFIG_DETECT_HUNG_TASK is not set
CONFIG_SCHED_DEBUG=y
CONFIG_SCHEDSTATS=y
# CONFIG_TIMER_STATS is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_SLUB_DEBUG_ON is not set
CONFIG_SLUB_STATS=y
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_RT_MUTEX_TESTER is not set
CONFIG_DEBUG_SPINLOCK=y
CONFIG_DEBUG_MUTEXES=y
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y
# CONFIG_PROVE_RCU is not set
# CONFIG_SPARSE_RCU_POINTER is not set
CONFIG_LOCKDEP=y
# CONFIG_LOCK_STAT is not set
# CONFIG_DEBUG_LOCKDEP is not set
CONFIG_TRACE_IRQFLAGS=y
# CONFIG_DEBUG_ATOMIC_SLEEP is not set
CONFIG_DEBUG_LOCKING_API_SELFTESTS=y
CONFIG_STACKTRACE=y
CONFIG_DEBUG_STACK_USAGE=y
# CONFIG_DEBUG_KOBJECT is not set
CONFIG_DEBUG_BUGVERBOSE=y
# CONFIG_DEBUG_INFO is not set
# CONFIG_DEBUG_VM is not set
# CONFIG_DEBUG_VIRTUAL is not set
# CONFIG_DEBUG_WRITECOUNT is not set
CONFIG_DEBUG_MEMORY_INIT=y
CONFIG_DEBUG_LIST=y
# CONFIG_TEST_LIST_SORT is not set
CONFIG_DEBUG_SG=y
CONFIG_DEBUG_NOTIFIERS=y
# CONFIG_DEBUG_CREDENTIALS is not set
CONFIG_ARCH_WANT_FRAME_POINTERS=y
CONFIG_FRAME_POINTER=y
# CONFIG_BOOT_PRINTK_DELAY is not set
# CONFIG_RCU_TORTURE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60
# CONFIG_RCU_CPU_STALL_INFO is not set
CONFIG_RCU_TRACE=y
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
CONFIG_DEBUG_FORCE_WEAK_PER_CPU=y
# CONFIG_DEBUG_PER_CPU_MAPS is not set
# CONFIG_LKDTM is not set
CONFIG_CPU_NOTIFIER_ERROR_INJECT=m
# CONFIG_FAULT_INJECTION is not set
CONFIG_LATENCYTOP=y
CONFIG_DEBUG_PAGEALLOC=y
CONFIG_WANT_PAGE_DEBUG_FLAGS=y
CONFIG_PAGE_GUARD=y
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
# CONFIG_EVENT_POWER_TRACING_DEPRECATED is not set
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_RING_BUFFER_ALLOW_SWAP=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
CONFIG_FUNCTION_TRACER=y
# CONFIG_FUNCTION_GRAPH_TRACER is not set
# CONFIG_IRQSOFF_TRACER is not set
# CONFIG_SCHED_TRACER is not set
CONFIG_FTRACE_SYSCALLS=y
CONFIG_TRACE_BRANCH_PROFILING=y
# CONFIG_BRANCH_PROFILE_NONE is not set
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
CONFIG_PROFILE_ALL_BRANCHES=y
# CONFIG_BRANCH_TRACER is not set
CONFIG_STACK_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y
# CONFIG_UPROBE_EVENT is not set
# CONFIG_PROBE_EVENTS is not set
CONFIG_DYNAMIC_FTRACE=y
# CONFIG_FUNCTION_PROFILER is not set
CONFIG_FTRACE_MCOUNT_RECORD=y
CONFIG_FTRACE_SELFTEST=y
CONFIG_FTRACE_STARTUP_TEST=y
# CONFIG_EVENT_TRACE_TEST_SYSCALLS is not set
# CONFIG_RING_BUFFER_BENCHMARK is not set
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DMA_API_DEBUG=y
CONFIG_ATOMIC64_SELFTEST=y
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KGDB=y
CONFIG_HAVE_ARCH_KMEMCHECK=y
# CONFIG_TEST_KSTRTOX is not set
CONFIG_STRICT_DEVMEM=y
# CONFIG_X86_VERBOSE_BOOTUP is not set
# CONFIG_EARLY_PRINTK is not set
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_X86_PTDUMP=y
# CONFIG_DEBUG_RODATA is not set
CONFIG_DEBUG_SET_MODULE_RONX=y
# CONFIG_DEBUG_NX_TEST is not set
# CONFIG_IOMMU_STRESS is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
CONFIG_IO_DELAY_TYPE_0X80=0
CONFIG_IO_DELAY_TYPE_0XED=1
CONFIG_IO_DELAY_TYPE_UDELAY=2
CONFIG_IO_DELAY_TYPE_NONE=3
# CONFIG_IO_DELAY_0X80 is not set
CONFIG_IO_DELAY_0XED=y
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEFAULT_IO_DELAY_TYPE=1
# CONFIG_DEBUG_BOOT_PARAMS is not set
# CONFIG_CPA_DEBUG is not set
CONFIG_OPTIMIZE_INLINING=y
CONFIG_DEBUG_NMI_SELFTEST=y

#
# Security options
#
CONFIG_KEYS=y
# CONFIG_TRUSTED_KEYS is not set
CONFIG_ENCRYPTED_KEYS=m
# CONFIG_KEYS_DEBUG_PROC_KEYS is not set
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
CONFIG_SECURITY_PATH=y
# CONFIG_SECURITY_TOMOYO is not set
# CONFIG_SECURITY_APPARMOR is not set
CONFIG_SECURITY_YAMA=y
CONFIG_INTEGRITY=y
# CONFIG_INTEGRITY_SIGNATURE is not set
CONFIG_IMA=y
CONFIG_IMA_MEASURE_PCR_IDX=10
CONFIG_IMA_AUDIT=y
# CONFIG_EVM is not set
CONFIG_DEFAULT_SECURITY_YAMA=y
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_DEFAULT_SECURITY="yama"
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=m
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=m
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=m
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_PCOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
CONFIG_CRYPTO_USER=m
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
CONFIG_CRYPTO_GF128MUL=m
CONFIG_CRYPTO_NULL=m
CONFIG_CRYPTO_WORKQUEUE=y
CONFIG_CRYPTO_CRYPTD=m
CONFIG_CRYPTO_AUTHENC=m
# CONFIG_CRYPTO_TEST is not set
CONFIG_CRYPTO_ABLK_HELPER_X86=m
CONFIG_CRYPTO_GLUE_HELPER_X86=m

#
# Authenticated Encryption with Associated Data
#
CONFIG_CRYPTO_CCM=m
# CONFIG_CRYPTO_GCM is not set
CONFIG_CRYPTO_SEQIV=m

#
# Block modes
#
CONFIG_CRYPTO_CBC=m
CONFIG_CRYPTO_CTR=m
# CONFIG_CRYPTO_CTS is not set
# CONFIG_CRYPTO_ECB is not set
CONFIG_CRYPTO_LRW=m
# CONFIG_CRYPTO_PCBC is not set
CONFIG_CRYPTO_XTS=m

#
# Hash modes
#
CONFIG_CRYPTO_HMAC=y

#
# Digest
#
CONFIG_CRYPTO_CRC32C=m
# CONFIG_CRYPTO_CRC32C_INTEL is not set
# CONFIG_CRYPTO_GHASH is not set
# CONFIG_CRYPTO_MD4 is not set
CONFIG_CRYPTO_MD5=y
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_RMD128 is not set
CONFIG_CRYPTO_RMD160=m
# CONFIG_CRYPTO_RMD256 is not set
# CONFIG_CRYPTO_RMD320 is not set
CONFIG_CRYPTO_SHA1=y
# CONFIG_CRYPTO_SHA1_SSSE3 is not set
CONFIG_CRYPTO_SHA256=m
# CONFIG_CRYPTO_SHA512 is not set
CONFIG_CRYPTO_TGR192=m
CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL=m

#
# Ciphers
#
CONFIG_CRYPTO_AES=m
CONFIG_CRYPTO_AES_X86_64=m
# CONFIG_CRYPTO_AES_NI_INTEL is not set
CONFIG_CRYPTO_ANUBIS=m
CONFIG_CRYPTO_ARC4=m
CONFIG_CRYPTO_BLOWFISH=m
CONFIG_CRYPTO_BLOWFISH_COMMON=m
CONFIG_CRYPTO_BLOWFISH_X86_64=m
# CONFIG_CRYPTO_CAMELLIA is not set
CONFIG_CRYPTO_CAMELLIA_X86_64=m
CONFIG_CRYPTO_CAST5=m
CONFIG_CRYPTO_CAST6=m
# CONFIG_CRYPTO_DES is not set
# CONFIG_CRYPTO_FCRYPT is not set
CONFIG_CRYPTO_KHAZAD=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SERPENT_SSE2_X86_64=m
# CONFIG_CRYPTO_SERPENT_AVX_X86_64 is not set
CONFIG_CRYPTO_TEA=m
# CONFIG_CRYPTO_TWOFISH is not set
CONFIG_CRYPTO_TWOFISH_COMMON=m
CONFIG_CRYPTO_TWOFISH_X86_64=m
CONFIG_CRYPTO_TWOFISH_X86_64_3WAY=m
# CONFIG_CRYPTO_TWOFISH_AVX_X86_64 is not set

#
# Compression
#
CONFIG_CRYPTO_DEFLATE=m
# CONFIG_CRYPTO_ZLIB is not set
CONFIG_CRYPTO_LZO=m

#
# Random Number Generation
#
# CONFIG_CRYPTO_ANSI_CPRNG is not set
CONFIG_CRYPTO_USER_API=m
# CONFIG_CRYPTO_USER_API_HASH is not set
CONFIG_CRYPTO_USER_API_SKCIPHER=m
CONFIG_CRYPTO_HW=y
# CONFIG_CRYPTO_DEV_PADLOCK is not set
CONFIG_HAVE_KVM=y
# CONFIG_VIRTUALIZATION is not set
CONFIG_BINARY_PRINTF=y

#
# Library routines
#
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_IO=y
CONFIG_CRC_CCITT=m
CONFIG_CRC16=m
# CONFIG_CRC_T10DIF is not set
# CONFIG_CRC_ITU_T is not set
CONFIG_CRC32=y
CONFIG_CRC32_SELFTEST=y
# CONFIG_CRC32_SLICEBY8 is not set
CONFIG_CRC32_SLICEBY4=y
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC7 is not set
# CONFIG_LIBCRC32C is not set
CONFIG_CRC8=m
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=m
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
# CONFIG_XZ_DEC is not set
# CONFIG_XZ_DEC_BCJ is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_NLATTR=y
CONFIG_AVERAGE=y
CONFIG_CORDIC=m
CONFIG_DDR=y


* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
@ 2012-07-12 13:06       ` Fengguang Wu
  0 siblings, 0 replies; 96+ messages in thread
From: Fengguang Wu @ 2012-07-12 13:06 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar,
	herbert, davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel,
	gustavo, johan.hedberg, linux-bluetooth, martin.petersen

[-- Attachment #1: Type: message/external-body, Size: 509 bytes --]

[-- Attachment #2: dmesg-kvm-slim-4225-2012-07-12-19-15-31 --]
[-- Type: text/plain, Size: 28151 bytes --]

[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.5.0-rc6-08414-g9645fff (kbuild@snb) (gcc version 4.7.0 (Debian 4.7.0-11) ) #15 SMP Thu Jul 12 19:12:36 CST 2012
[    0.000000] Command line: trinity=10m tree=mm:akpm auth_hashtable_size=10 sunrpc.auth_hashtable_size=10 log_buf_len=8M ignore_loglevel debug sched_debug apic=debug dynamic_printk sysrq_always_enabled panic=10 hung_task_panic=1 softlockup_panic=1 unknown_nmi_panic=1 nmi_watchdog=panic,lapic  prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal  root=/dev/ram0 rw link=vmlinuz-2012-07-12-19-14-51-mm-origin.akpm-674d249-9645fff-x86_64-randconfig-mm7-1-slim BOOT_IMAGE=kernel-tests/kernels/x86_64-randconfig-mm7/9645fffacccf3082c94097b03e5f950e4713f18a/vmlinuz-3.5.0-rc6-08414-g9645fff
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   Centaur CentaurHauls
[    0.000000] Disabled fast string operations
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x0000000000093bff] usable
[    0.000000] BIOS-e820: [mem 0x0000000000093c00-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000000fffcfff] usable
[    0.000000] BIOS-e820: [mem 0x000000000fffd000-0x000000000fffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[    0.000000] debug: ignoring loglevel setting.
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] e820: update [mem 0x00000000-0x0000ffff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0xfffd max_arch_pfn = 0x400000000
[    0.000000] MTRR default type: write-back
[    0.000000] MTRR fixed ranges enabled:
[    0.000000]   00000-9FFFF write-back
[    0.000000]   A0000-BFFFF uncachable
[    0.000000]   C0000-FFFFF write-protect
[    0.000000] MTRR variable ranges enabled:
[    0.000000]   0 base 00E0000000 mask FFE0000000 uncachable
[    0.000000]   1 disabled
[    0.000000]   2 disabled
[    0.000000]   3 disabled
[    0.000000]   4 disabled
[    0.000000]   5 disabled
[    0.000000]   6 disabled
[    0.000000]   7 disabled
[    0.000000] Scan for SMP in [mem 0x00000000-0x000003ff]
[    0.000000] Scan for SMP in [mem 0x0009fc00-0x0009ffff]
[    0.000000] Scan for SMP in [mem 0x000f0000-0x000fffff]
[    0.000000] found SMP MP-table at [mem 0x000fdac0-0x000fdacf] mapped at [ffff8800000fdac0]
[    0.000000]   mpc: fdad0-fdbec
[    0.000000] initial memory mapped: [mem 0x00000000-0x1fffffff]
[    0.000000] Base memory trampoline at [ffff88000008d000] 8d000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x0fffcfff]
[    0.000000]  [mem 0x00000000-0x0fffcfff] page 4k
[    0.000000] kernel direct mapping tables up to 0xfffcfff @ [mem 0x0e854000-0x0e8d5fff]
[    0.000000] log_buf_len: 8388608
[    0.000000] early log buf free: 127940(97%)
[    0.000000] RAMDISK: [mem 0x0e8d6000-0x0ffeffff]
[    0.000000] No NUMA configuration found
[    0.000000] Faking a node at [mem 0x0000000000000000-0x000000000fffcfff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x0fffcfff]
[    0.000000]   NODE_DATA [mem 0x0fff8000-0x0fffcfff]
[    0.000000] kvm-clock: Using msrs 12 and 11
[    0.000000] kvm-clock: cpu 0, msr 0:1c6ce01, boot clock
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00010000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00010000-0x00092fff]
[    0.000000]   node   0: [mem 0x00100000-0x0fffcfff]
[    0.000000] On node 0 totalpages: 65408
[    0.000000]   DMA zone: 64 pages used for memmap
[    0.000000]   DMA zone: 6 pages reserved
[    0.000000]   DMA zone: 3901 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 960 pages used for memmap
[    0.000000]   DMA32 zone: 60477 pages, LIFO batch:15
[    0.000000] Intel MultiProcessor Specification v1.4
[    0.000000]   mpc: fdad0-fdbec
[    0.000000] MPTABLE: OEM ID: BOCHSCPU
[    0.000000] MPTABLE: Product ID: 0.1         
[    0.000000] MPTABLE: APIC at: 0xFEE00000
[    0.000000] mapped APIC to ffffffffff5fb000 (        fee00000)
[    0.000000] Processor #0 (Bootup-CPU)
[    0.000000] Processor #1
[    0.000000] Bus #0 is PCI   
[    0.000000] Bus #1 is ISA   
[    0.000000] IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 04, APIC ID 2, APIC INT 09
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 0c, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 10, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 14, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 18, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 1c, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 20, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 24, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 00, APIC ID 2, APIC INT 02
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 01, APIC ID 2, APIC INT 01
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 03, APIC ID 2, APIC INT 03
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 04, APIC ID 2, APIC INT 04
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 05, APIC ID 2, APIC INT 05
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 06, APIC ID 2, APIC INT 06
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 07, APIC ID 2, APIC INT 07
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 08, APIC ID 2, APIC INT 08
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0c, APIC ID 2, APIC INT 0c
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0d, APIC ID 2, APIC INT 0d
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0e, APIC ID 2, APIC INT 0e
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0f, APIC ID 2, APIC INT 0f
[    0.000000] Lint: type 3, pol 0, trig 0, bus 01, IRQ 00, APIC ID 0, APIC LINT 00
[    0.000000] Lint: type 1, pol 0, trig 0, bus 01, IRQ 00, APIC ID 0, APIC LINT 01
[    0.000000] Processors: 2
[    0.000000] smpboot: Allowing 2 CPUs, 0 hotplug CPUs
[    0.000000] mapped IOAPIC to ffffffffff5fa000 (fec00000)
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: 0000000000093000 - 0000000000094000
[    0.000000] PM: Registered nosave memory: 0000000000094000 - 00000000000a0000
[    0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000
[    0.000000] PM: Registered nosave memory: 00000000000f0000 - 0000000000100000
[    0.000000] e820: [mem 0x10000000-0xfeffbfff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on KVM
[    0.000000] setup_percpu: NR_CPUS:8 nr_cpumask_bits:8 nr_cpu_ids:2 nr_node_ids:1
[    0.000000] PERCPU: Embedded 26 pages/cpu @ffff88000dc00000 s76800 r8192 d21504 u1048576
[    0.000000] pcpu-alloc: s76800 r8192 d21504 u1048576 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 
[    0.000000] kvm-clock: cpu 0, msr 0:dc11e01, primary cpu clock
[    0.000000] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 64378
[    0.000000] Policy zone: DMA32
[    0.000000] Kernel command line: trinity=10m tree=mm:akpm auth_hashtable_size=10 sunrpc.auth_hashtable_size=10 log_buf_len=8M ignore_loglevel debug sched_debug apic=debug dynamic_printk sysrq_always_enabled panic=10 hung_task_panic=1 softlockup_panic=1 unknown_nmi_panic=1 nmi_watchdog=panic,lapic  prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal  root=/dev/ram0 rw link=vmlinuz-2012-07-12-19-14-51-mm-origin.akpm-674d249-9645fff-x86_64-randconfig-mm7-1-slim BOOT_IMAGE=kernel-tests/kernels/x86_64-randconfig-mm7/9645fffacccf3082c94097b03e5f950e4713f18a/vmlinuz-3.5.0-rc6-08414-g9645fff
[    0.000000] PID hash table entries: 1024 (order: 1, 8192 bytes)
[    0.000000] __ex_table already sorted, skipping sort
[    0.000000] Memory: 199892k/262132k available (4847k kernel code, 500k absent, 61740k reserved, 7791k data, 568k init)
[    0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
[    0.000000] Hierarchical RCU implementation.
[    0.000000] 	RCU debugfs-based tracing is enabled.
[    0.000000] 	RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
[    0.000000] NR_IRQS:4352 nr_irqs:56 16
[    0.000000] console [ttyS0] enabled
[    0.000000] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
[    0.000000] ... MAX_LOCKDEP_SUBCLASSES:  8
[    0.000000] ... MAX_LOCK_DEPTH:          48
[    0.000000] ... MAX_LOCKDEP_KEYS:        8191
[    0.000000] ... CLASSHASH_SIZE:          4096
[    0.000000] ... MAX_LOCKDEP_ENTRIES:     16384
[    0.000000] ... MAX_LOCKDEP_CHAINS:      32768
[    0.000000] ... CHAINHASH_SIZE:          16384
[    0.000000]  memory used by lock dependency info: 5855 kB
[    0.000000]  per task-struct memory footprint: 1920 bytes
[    0.000000] ------------------------
[    0.000000] | Locking API testsuite:
[    0.000000] ----------------------------------------------------------------------------
[    0.000000]                                  | spin |wlock |rlock |mutex | wsem | rsem |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]                      A-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                  A-B-B-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]              A-B-B-C-C-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]              A-B-C-A-B-C deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-B-C-C-D-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-C-D-B-D-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-C-D-B-C-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                     double unlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                   initialize held:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                  bad unlock order:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]               recursive read-lock:             |  ok  |             |  ok  |
[    0.000000]            recursive read-lock #2:             |  ok  |             |  ok  |
[    0.000000]             mixed read-write-lock:             |  ok  |             |  ok  |
[    0.000000]             mixed write-read-lock:             |  ok  |             |  ok  |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]      hard-irqs-on + irq-safe-A/12:  ok  |  ok  |  ok  |
[    0.000000]      soft-irqs-on + irq-safe-A/12:  ok  |  ok  |  ok  |
[    0.000000]      hard-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
[    0.000000]      soft-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
[    0.000000]        sirq-safe-A => hirqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]        sirq-safe-A => hirqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]          hard-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]          soft-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]          hard-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]          soft-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/132:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/132:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/213:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/213:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/231:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/231:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/312:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/312:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/321:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/321:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/123:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/123:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/132:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/132:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/213:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/213:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/231:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/231:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/312:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/312:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/321:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/321:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/123:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/123:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/132:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/132:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/213:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/213:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/231:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/231:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/312:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/312:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/321:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/321:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq read-recursion/123:  ok  |
[    0.000000]       soft-irq read-recursion/123:  ok  |
[    0.000000]       hard-irq read-recursion/132:  ok  |
[    0.000000]       soft-irq read-recursion/132:  ok  |
[    0.000000]       hard-irq read-recursion/213:  ok  |
[    0.000000]       soft-irq read-recursion/213:  ok  |
[    0.000000]       hard-irq read-recursion/231:  ok  |
[    0.000000]       soft-irq read-recursion/231:  ok  |
[    0.000000]       hard-irq read-recursion/312:  ok  |
[    0.000000]       soft-irq read-recursion/312:  ok  |
[    0.000000]       hard-irq read-recursion/321:  ok  |
[    0.000000]       soft-irq read-recursion/321:  ok  |
[    0.000000] -------------------------------------------------------
[    0.000000] Good, all 218 testcases passed! |
[    0.000000] ---------------------------------
[    0.000000] tsc: Detected 2999.462 MHz processor
[    0.001999] Calibrating delay loop (skipped) preset value.. 5998.92 BogoMIPS (lpj=2999462)
[    0.003010] pid_max: default: 32768 minimum: 301
[    0.005213] Security Framework initialized
[    0.006075] Yama: becoming mindful.
[    0.008740] Dentry cache hash table entries: 32768 (order: 6, 262144 bytes)
[    0.011850] Inode-cache hash table entries: 16384 (order: 5, 131072 bytes)
[    0.014061] Mount-cache hash table entries: 256
[    0.018011] Initializing cgroup subsys debug
[    0.019012] Initializing cgroup subsys freezer
[    0.020010] Initializing cgroup subsys perf_event
[    0.021165] Disabled fast string operations
[    0.024612] ftrace: allocating 11013 entries in 44 pages
[    0.033344] Getting VERSION: 50014
[    0.034015] Getting VERSION: 50014
[    0.035014] Getting ID: 0
[    0.035731] Getting ID: ff000000
[    0.036014] Getting LVT0: 8700
[    0.037011] Getting LVT1: 8400
[    0.038084] enabled ExtINT on CPU#0
[    0.040907] ENABLING IO-APIC IRQs
[    0.041011] init IO_APIC IRQs
[    0.042007]  apic 2 pin 0 not connected
[    0.043041] IOAPIC[0]: Set routing entry (2-1 -> 0x41 -> IRQ 1 Mode:0 Active:0 Dest:1)
[    0.045035] IOAPIC[0]: Set routing entry (2-2 -> 0x51 -> IRQ 0 Mode:0 Active:0 Dest:1)
[    0.047032] IOAPIC[0]: Set routing entry (2-3 -> 0x61 -> IRQ 3 Mode:0 Active:0 Dest:1)
[    0.049046] IOAPIC[0]: Set routing entry (2-4 -> 0x71 -> IRQ 4 Mode:0 Active:0 Dest:1)
[    0.051027] IOAPIC[0]: Set routing entry (2-5 -> 0x81 -> IRQ 5 Mode:0 Active:0 Dest:1)
[    0.053027] IOAPIC[0]: Set routing entry (2-6 -> 0x91 -> IRQ 6 Mode:0 Active:0 Dest:1)
[    0.055027] IOAPIC[0]: Set routing entry (2-7 -> 0xa1 -> IRQ 7 Mode:0 Active:0 Dest:1)
[    0.057026] IOAPIC[0]: Set routing entry (2-8 -> 0xb1 -> IRQ 8 Mode:0 Active:0 Dest:1)
[    0.059037] IOAPIC[0]: Set routing entry (2-9 -> 0xc1 -> IRQ 33 Mode:1 Active:0 Dest:1)
[    0.062029] IOAPIC[0]: Set routing entry (2-10 -> 0xd1 -> IRQ 34 Mode:1 Active:0 Dest:1)
[    0.064029] IOAPIC[0]: Set routing entry (2-11 -> 0xe1 -> IRQ 35 Mode:1 Active:0 Dest:1)
[    0.066023] IOAPIC[0]: Set routing entry (2-12 -> 0x22 -> IRQ 12 Mode:0 Active:0 Dest:1)
[    0.068026] IOAPIC[0]: Set routing entry (2-13 -> 0x42 -> IRQ 13 Mode:0 Active:0 Dest:1)
[    0.070025] IOAPIC[0]: Set routing entry (2-14 -> 0x52 -> IRQ 14 Mode:0 Active:0 Dest:1)
[    0.073004] IOAPIC[0]: Set routing entry (2-15 -> 0x62 -> IRQ 15 Mode:0 Active:0 Dest:1)
[    0.075020]  apic 2 pin 16 not connected
[    0.075999]  apic 2 pin 17 not connected
[    0.076999]  apic 2 pin 18 not connected
[    0.077999]  apic 2 pin 19 not connected
[    0.078999]  apic 2 pin 20 not connected
[    0.079999]  apic 2 pin 21 not connected
[    0.080999]  apic 2 pin 22 not connected
[    0.081999]  apic 2 pin 23 not connected
[    0.083158] ..TIMER: vector=0x51 apic1=0 pin1=2 apic2=-1 pin2=-1
[    0.084998] smpboot: CPU0: Intel Common KVM processor stepping 01
[    0.087425] Using local APIC timer interrupts.
[    0.087425] calibrating APIC timer ...
[    0.090992] ... lapic delta = 6249032
[    0.090992] ..... delta 6249032
[    0.090992] ..... mult: 268434682
[    0.090992] ..... calibration result: 999845
[    0.090992] ..... CPU clock speed is 2998.0997 MHz.
[    0.090992] ..... host bus clock speed is 999.0845 MHz.
[    0.090992] ... verify APIC timer
[    0.201346] ... jiffies delta = 100
[    0.201984] ... jiffies result ok
[    0.203030] Performance Events: unsupported Netburst CPU model 6 no PMU driver, software events only.
[    0.207035] ------------[ cut here ]------------
[    0.207977] WARNING: at /c/kernel-tests/mm/kernel/workqueue.c:1217 worker_enter_idle+0x2b8/0x32b()
[    0.207977] Modules linked in:
[    0.207977] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-08414-g9645fff #15
[    0.207977] Call Trace:
[    0.207977]  [<ffffffff81087189>] ? worker_enter_idle+0x2b8/0x32b
[    0.207977]  [<ffffffff810559d9>] warn_slowpath_common+0xae/0xdb
[    0.207977]  [<ffffffff81055a2e>] warn_slowpath_null+0x28/0x31
[    0.207977]  [<ffffffff81087189>] worker_enter_idle+0x2b8/0x32b
[    0.207977]  [<ffffffff81087222>] start_worker+0x26/0x42
[    0.207977]  [<ffffffff81c8b261>] init_workqueues+0x2d2/0x59a
[    0.207977]  [<ffffffff81c8af8f>] ? usermodehelper_init+0x8a/0x8a
[    0.207977]  [<ffffffff81000284>] do_one_initcall+0xce/0x272
[    0.207977]  [<ffffffff81c6f650>] kernel_init+0x12e/0x3c1
[    0.207977]  [<ffffffff814b9b74>] kernel_thread_helper+0x4/0x10
[    0.207977]  [<ffffffff814b80b0>] ? retint_restore_args+0x13/0x13
[    0.207977]  [<ffffffff81c6f522>] ? start_kernel+0x737/0x737
[    0.207977]  [<ffffffff814b9b70>] ? gs_change+0x13/0x13
[    0.207977] ---[ end trace 5eb91373aeac2b15 ]---
[    0.210519] Testing tracer nop: PASSED
[    0.212314] NMI watchdog: disabled (cpu0): hardware events not enabled
[    0.215909] SMP alternatives: lockdep: fixing up alternatives
[    0.216992] smpboot: Booting Node   0, Processors  #1 OK
[    0.001999] kvm-clock: cpu 1, msr 0:dd11e01, secondary cpu clock
[    0.001999] masked ExtINT on CPU#1
[    0.001999] Disabled fast string operations
[    0.233973] TSC synchronization [CPU#0 -> CPU#1]:
[    0.233973] Measured 1551 cycles TSC warp between CPUs, turning off TSC clock.
[    0.233973] tsc: Marking TSC unstable due to check_tsc_sync_source failed
[    0.244338] Brought up 2 CPUs
[    0.244988] ----------------
[    0.245746] | NMI testsuite:
[    0.245976] --------------------
[    0.246976]   remote IPI:  ok  |
[    0.251287]    local IPI:  ok  |
[    0.256982] --------------------
[    0.257844] Good, all   2 testcases passed! |
[    0.258974] ---------------------------------
[    0.259976] smpboot: Total of 2 processors activated (11997.84 BogoMIPS)
[    0.262415] CPU0 attaching sched-domain:
[    0.262979]  domain 0: span 0-1 level CPU
[    0.264443]   groups: 0 (cpu_power = 1023) 1
[    0.265676] CPU1 attaching sched-domain:
[    0.265976]  domain 0: span 0-1 level CPU
[    0.267973]   groups: 1 0 (cpu_power = 1023)
[    0.277762] devtmpfs: initialized
[    0.278040] device: 'platform': device_add
[    0.279040] PM: Adding info for No Bus:platform
[    0.281100] bus: 'platform': registered
[    0.282097] bus: 'cpu': registered
[    0.282977] device: 'cpu': device_add
[    0.288670] PM: Adding info for No Bus:cpu
[    0.289057] bus: 'memory': registered
[    0.289975] device: 'memory': device_add
[    0.290996] PM: Adding info for No Bus:memory
[    0.293022] device: 'memory0': device_add
[    0.294004] bus: 'memory': add device memory0
[    0.301519] PM: Adding info for memory:memory0
[    0.302133] device: 'memory1': device_add
[    0.302977] bus: 'memory': add device memory1
[    0.304997] PM: Adding info for memory:memory1
[    0.322930] atomic64 test passed for x86-64 platform with CX8 and with SSE
[    0.323973] device class 'regulator': registering
[    0.326225] Registering platform device 'reg-dummy'. Parent at platform
[    0.335503] device: 'reg-dummy': device_add
[    0.335986] bus: 'platform': add device reg-dummy
[    0.337979] PM: Adding info for platform:reg-dummy
[    0.339011] bus: 'platform': add driver reg-dummy
[    0.339974] bus: 'platform': driver_probe_device: matched device reg-dummy with driver reg-dummy
[    0.341966] bus: 'platform': really_probe: probing driver reg-dummy with device reg-dummy
[    0.352600] device: 'regulator.0': device_add
[    0.353991] PM: Adding info for No Bus:regulator.0
[    0.355105] dummy: 
[    0.356032] driver: 'reg-dummy': driver_bound: bound to device 'reg-dummy'
[    0.357006] bus: 'platform': really_probe: bound device reg-dummy to driver reg-dummy
[    0.365639] RTC time: 11:15:27, date: 07/12/12
[    0.367178] NET: Registered protocol family 16
[    0.368221] device class 'bdi': registering
[    0.369003] device class 'tty': registering
[    0.370005] bus: 'node': registered
[    0.370963] device: 'node': device_add
[    0.378514] PM: Adding info for No Bus:node
[    0.379975] device class 'dma': registering
[    0.381072] device: 'node0': device_add
[    0.381964] bus: 'node': add device node0
[    0.382983] PM: Adding info for node:node0
[    0.384059] device: 'cpu0': device_add
[    0.391500] bus: 'cpu': add device cpu0
[    0.391982] PM: Adding info for cpu:cpu0
[    0.393007] device: 'cpu1': device_add
[    0.394015] bus: 'cpu': add device cpu1
[    0.394982] PM: Adding info for cpu:cpu1
[    0.395990] mtrr: your CPUs had inconsistent variable MTRR settings
[    0.397953] mtrr: your CPUs had inconsistent MTRRdefType settings
[    0.399953] mtrr: probably your BIOS does not setup all CPUs.
[    0.400953] mtrr: corrected configuration.
[    0.414025] device: 'default': device_add
[    0.415055] PM: Adding info for No Bus:default
[    0.418486] bio: create slab <bio-0> at 0
[    0.419082] device class 'block': registering
[    0.421070] device class 'misc': registering
[    0.422218] bus: 'serio': registered
[    0.422962] device class 'input': registering
[    0.426047] device class 'power_supply': registering
[    0.426983] device class 'watchdog': registering
[    0.428039] device class 'net': registering
[    0.430169] device: 'lo': device_add
[    0.431185] PM: Adding info for No Bus:lo
[    0.431604] Switching to clocksource kvm-clock
[    0.436812] Warning: could not register all branches stats
[    0.438281] Warning: could not register annotated branches stats
[    0.561660] device class 'mem': registering
[    0.562848] device: 'mem': device_add
[    0.564244] PM: Adding info for No Bus:mem
[    0.565406] device: 'kmem': device_add
[    0.566698] PM: Adding info for No Bus:kmem
[    0.567942] device: 'null': device_add
[    0.569141] PM: Adding info for No Bus:null
[    0.570280] device: 'zero': device_add
[    0.571499] PM: Adding info for No Bus:zero
[    0.572649] device: 'full': device_add
[    0.573805] PM: Adding info for No Bus:full
[    0.574929] device: 'random': device_add
[    0.576239] PM: Adding info for No Bus:random
[    0.577487] device: 'urandom': device_add
[    0.578784] PM: Adding info for No Bus:urandom
[    0.579994] device: 'kmsg': device_add
[    0.581186] PM: Adding info for No Bus:kmsg
[    0.582333] device: 'tty': device_add
[    0.583552] PM: Adding info for No Bus:tty
[    0.584866] device: 'console': device_add
[    0.586191] PM: Adding info for No Bus:console
[    0.587491] NET: Registered protocol family 1
[    0.589321] Unpacking initramfs...
[    2.786882] debug: unmapping init [mem 0xffff88000e8d6000-0xffff88000ffeffff]
[    2.871676] DMA-API: preallocated 32768 debug entries
[    2.873030] DMA-API: debugging enabled by kernel config
[    2.874668] Registering platform device 'rtc_cmos'. Parent at platform
[    2.876377] device: 'rtc_cmos': device_add
[    2.877481] bus: 'platform': add device rtc_cmos
[    2.878840] PM: Adding info for platform:rtc_cmos
[    2.880110] platform rtc_cmos: registered platform RTC device (no PNP device found)
[    2.882497] device: 'snapshot': device_add
[    2.883820] PM: Adding info for No Bus:snapshot
[    2.885128] bus: 'clocksource': registered
[    2.886236] device: 'clocksource': device_add
[    2.887415] PM: Adding info for No Bus:clocksource
[    2.888714] device: 'clocksource0': device_add
[    2.889895] bus: 'clocksource': add device clocksource0
[    2.891313] PM: Adding info for clocksource:clocksource0
[    2.892734] bus: 'platform': add driver alarmtimer
[    2.894050] Registering platform device 'alarmtimer'. Parent at platform
[    2.895808] device: 'alarmtimer': device_add
[    2.896943] bus: 'platform': add device alarmtimer
[    2.898261] PM: Adding info for platform:alarmtimer
[    2.899553] bus: 'platform': driver_probe_device: matched device alarmtimer with driver alarmtimer
[    2.901860] bus: 'platform': really_probe: probing driver alarmtimer with device alarmtimer
[    2.904029] driver: 'alarmtimer': driver_bound: bound to device 'alarmtimer'
[    2.905872] bus: 'platform': really_probe: bound device alarmtimer to driver alarmtimer
[    2.908139] audit: initializing netlink socket (disabled)
[    2.909625] type=2000 audit(1342091729.908:1): initialized
[    2.923090] Testing tracer function: PASSED
[    3.083849] Testing dynamic ftrace: PASSED
[    3.347420] Testing dynamic ftrace ops #1: [    3.374759] kwatchdog (24) used greatest stack depth: 6584 bytes left
(1 0 1 1 0) (1 1 2 1 0) 

[-- Attachment #3: config-3.5.0-rc6-08414-g9645fff --]
[-- Type: text/plain, Size: 50953 bytes --]

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 3.5.0-rc6 Kernel Configuration
#
CONFIG_64BIT=y
# CONFIG_X86_32 is not set
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_LATENCYTOP_SUPPORT=y
CONFIG_MMU=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_NEED_SG_DMA_LENGTH=y
# CONFIG_GENERIC_ISA_DMA is not set
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_GENERIC_GPIO=y
# CONFIG_ARCH_MAY_HAVE_PC_FDC is not set
# CONFIG_RWSEM_GENERIC_SPINLOCK is not set
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_DEFAULT_IDLE=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_X86_64_SMP=y
CONFIG_X86_HT=y
CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
CONFIG_ARCH_CPU_PROBE_RELEASE=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_CONSTRUCTORS=y
CONFIG_HAVE_IRQ_WORK=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y

#
# General setup
#
# CONFIG_EXPERIMENTAL is not set
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_CROSS_COMPILE=""
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
# CONFIG_KERNEL_GZIP is not set
CONFIG_KERNEL_BZIP2=y
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
# CONFIG_BSD_PROCESS_ACCT is not set
CONFIG_FHANDLE=y
CONFIG_TASKSTATS=y
# CONFIG_TASK_DELAY_ACCT is not set
# CONFIG_TASK_XACCT is not set
CONFIG_AUDIT=y
# CONFIG_AUDITSYSCALL is not set
# CONFIG_AUDIT_LOGINUID_IMMUTABLE is not set
CONFIG_HAVE_GENERIC_HARDIRQS=y

#
# IRQ subsystem
#
CONFIG_GENERIC_HARDIRQS=y
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_IRQ_DOMAIN=y
# CONFIG_IRQ_DOMAIN_DEBUG is not set
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_DATA=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
# CONFIG_NO_HZ is not set
CONFIG_HIGH_RES_TIMERS=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_PREEMPT_RCU is not set
CONFIG_RCU_FANOUT=64
CONFIG_RCU_FANOUT_LEAF=16
# CONFIG_RCU_FANOUT_EXACT is not set
CONFIG_TREE_RCU_TRACE=y
CONFIG_IKCONFIG=y
# CONFIG_IKCONFIG_PROC is not set
CONFIG_LOG_BUF_SHIFT=17
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_FREEZER=y
# CONFIG_CGROUP_DEVICE is not set
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
# CONFIG_CGROUP_CPUACCT is not set
# CONFIG_RESOURCE_COUNTERS is not set
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
# CONFIG_BLK_CGROUP is not set
# CONFIG_CHECKPOINT_RESTORE is not set
# CONFIG_NAMESPACES is not set
CONFIG_SCHED_AUTOGROUP=y
CONFIG_MM_OWNER=y
# CONFIG_SYSFS_DEPRECATED is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
# CONFIG_RD_XZ is not set
CONFIG_RD_LZO=y
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
CONFIG_EXPERT=y
# CONFIG_UID16 is not set
CONFIG_SYSCTL_SYSCALL=y
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
CONFIG_HOTPLUG=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_HAVE_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
# CONFIG_EPOLL is not set
# CONFIG_SIGNALFD is not set
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
# CONFIG_SHMEM is not set
CONFIG_AIO=y
CONFIG_EMBEDDED=y
CONFIG_HAVE_PERF_EVENTS=y
CONFIG_PERF_USE_VMALLOC=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
CONFIG_DEBUG_PERF_USE_VMALLOC=y
CONFIG_VM_EVENT_COUNTERS=y
CONFIG_SLUB_DEBUG=y
# CONFIG_COMPAT_BRK is not set
# CONFIG_SLAB is not set
CONFIG_SLUB=y
# CONFIG_SLOB is not set
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
CONFIG_OPROFILE=m
# CONFIG_OPROFILE_EVENT_MULTIPLEX is not set
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
# CONFIG_KPROBES is not set
# CONFIG_JUMP_LABEL is not set
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_USE_GENERIC_SMP_HELPERS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y

#
# GCOV-based kernel profiling
#
CONFIG_GCOV_KERNEL=y
CONFIG_GCOV_PROFILE_ALL=y
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
# CONFIG_MODULE_UNLOAD is not set
# CONFIG_MODVERSIONS is not set
CONFIG_MODULE_SRCVERSION_ALL=y
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
# CONFIG_BLK_DEV_INTEGRITY is not set

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_OSF_PARTITION is not set
# CONFIG_AMIGA_PARTITION is not set
# CONFIG_ATARI_PARTITION is not set
# CONFIG_MAC_PARTITION is not set
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
# CONFIG_MINIX_SUBPARTITION is not set
# CONFIG_SOLARIS_X86_PARTITION is not set
CONFIG_UNIXWARE_DISKLABEL=y
CONFIG_LDM_PARTITION=y
CONFIG_LDM_DEBUG=y
# CONFIG_SGI_PARTITION is not set
CONFIG_ULTRIX_PARTITION=y
# CONFIG_SUN_PARTITION is not set
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
CONFIG_SYSV68_PARTITION=y
CONFIG_BLOCK_COMPAT=y

#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_DEADLINE=m
# CONFIG_IOSCHED_CFQ is not set
CONFIG_DEFAULT_NOOP=y
CONFIG_DEFAULT_IOSCHED="noop"
# CONFIG_INLINE_SPIN_TRYLOCK is not set
# CONFIG_INLINE_SPIN_TRYLOCK_BH is not set
# CONFIG_INLINE_SPIN_LOCK is not set
# CONFIG_INLINE_SPIN_LOCK_BH is not set
# CONFIG_INLINE_SPIN_LOCK_IRQ is not set
# CONFIG_INLINE_SPIN_LOCK_IRQSAVE is not set
CONFIG_UNINLINE_SPIN_UNLOCK=y
# CONFIG_INLINE_SPIN_UNLOCK_BH is not set
# CONFIG_INLINE_SPIN_UNLOCK_IRQ is not set
# CONFIG_INLINE_SPIN_UNLOCK_IRQRESTORE is not set
# CONFIG_INLINE_READ_TRYLOCK is not set
# CONFIG_INLINE_READ_LOCK is not set
# CONFIG_INLINE_READ_LOCK_BH is not set
# CONFIG_INLINE_READ_LOCK_IRQ is not set
# CONFIG_INLINE_READ_LOCK_IRQSAVE is not set
# CONFIG_INLINE_READ_UNLOCK is not set
# CONFIG_INLINE_READ_UNLOCK_BH is not set
# CONFIG_INLINE_READ_UNLOCK_IRQ is not set
# CONFIG_INLINE_READ_UNLOCK_IRQRESTORE is not set
# CONFIG_INLINE_WRITE_TRYLOCK is not set
# CONFIG_INLINE_WRITE_LOCK is not set
# CONFIG_INLINE_WRITE_LOCK_BH is not set
# CONFIG_INLINE_WRITE_LOCK_IRQ is not set
# CONFIG_INLINE_WRITE_LOCK_IRQSAVE is not set
# CONFIG_INLINE_WRITE_UNLOCK is not set
# CONFIG_INLINE_WRITE_UNLOCK_BH is not set
# CONFIG_INLINE_WRITE_UNLOCK_IRQ is not set
# CONFIG_INLINE_WRITE_UNLOCK_IRQRESTORE is not set
# CONFIG_MUTEX_SPIN_ON_OWNER is not set
CONFIG_FREEZER=y

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_MPPARSE=y
# CONFIG_X86_EXTENDED_PLATFORM is not set
CONFIG_SCHED_OMIT_FRAME_POINTER=y
# CONFIG_KVMTOOL_TEST_ENABLE is not set
CONFIG_PARAVIRT_GUEST=y
# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set
# CONFIG_XEN is not set
# CONFIG_XEN_PRIVILEGED_GUEST is not set
CONFIG_KVM_CLOCK=y
CONFIG_KVM_GUEST=y
CONFIG_PARAVIRT=y
CONFIG_PARAVIRT_CLOCK=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_NO_BOOTMEM=y
# CONFIG_MEMTEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_CMPXCHG=y
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_XADD=y
CONFIG_X86_WP_WORKS_OK=y
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_PROCESSOR_SELECT=y
CONFIG_CPU_SUP_INTEL=y
# CONFIG_CPU_SUP_AMD is not set
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_HPET_TIMER=y
# CONFIG_DMI is not set
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
CONFIG_NR_CPUS=8
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
# CONFIG_IRQ_TIME_ACCOUNTING is not set
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
# CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS is not set
# CONFIG_X86_MCE is not set
# CONFIG_I8K is not set
# CONFIG_MICROCODE is not set
# CONFIG_X86_MSR is not set
CONFIG_X86_CPUID=m
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
# CONFIG_DIRECT_GBPAGES is not set
CONFIG_NUMA=y
# CONFIG_NUMA_EMU is not set
CONFIG_NODES_SHIFT=6
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
CONFIG_ARCH_MEMORY_PROBE=y
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
# CONFIG_SPARSEMEM_VMEMMAP is not set
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
CONFIG_ARCH_DISCARD_MEMBLOCK=y
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_SPARSE=y
# CONFIG_MEMORY_HOTREMOVE is not set
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=999999
CONFIG_COMPACTION=y
CONFIG_MIGRATION=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_VIRT_TO_BUS=y
# CONFIG_KSM is not set
CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
# CONFIG_CROSS_MEMORY_ATTACH is not set
CONFIG_CLEANCACHE=y
# CONFIG_FRONTSWAP is not set
# CONFIG_X86_CHECK_BIOS_CORRUPTION is not set
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
# CONFIG_MTRR_SANITIZER is not set
# CONFIG_X86_PAT is not set
# CONFIG_ARCH_RANDOM is not set
# CONFIG_SECCOMP is not set
# CONFIG_CC_STACKPROTECTOR is not set
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
# CONFIG_CRASH_DUMP is not set
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_PHYSICAL_ALIGN=0x1000000
CONFIG_HOTPLUG_CPU=y
# CONFIG_COMPAT_VDSO is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y

#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
CONFIG_PM_AUTOSLEEP=y
# CONFIG_PM_WAKELOCKS is not set
CONFIG_PM_RUNTIME=y
CONFIG_PM=y
CONFIG_PM_DEBUG=y
CONFIG_PM_ADVANCED_DEBUG=y
CONFIG_PM_SLEEP_DEBUG=y
CONFIG_PM_TRACE=y
CONFIG_PM_TRACE_RTC=y
# CONFIG_SFI is not set

#
# CPU Frequency scaling
#
# CONFIG_CPU_FREQ is not set
CONFIG_CPU_IDLE=y
CONFIG_CPU_IDLE_GOV_LADDER=y
# CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not set
# CONFIG_INTEL_IDLE is not set

#
# Memory power savings
#

#
# Bus options (PCI etc.)
#
# CONFIG_PCI is not set
# CONFIG_ARCH_SUPPORTS_MSI is not set
# CONFIG_ISA_DMA_API is not set
CONFIG_PCCARD=m
CONFIG_PCMCIA=m

#
# PC-card bridges
#

#
# Executable file formats / Emulations
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
# CONFIG_HAVE_AOUT is not set
# CONFIG_BINFMT_MISC is not set
CONFIG_IA32_EMULATION=y
# CONFIG_IA32_AOUT is not set
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_KEYS_COMPAT=y
CONFIG_HAVE_TEXT_POKE_SMP=y
CONFIG_X86_DEV_DMA_OPS=y
CONFIG_NET=y
CONFIG_COMPAT_NETLINK_MESSAGES=y

#
# Networking options
#
CONFIG_PACKET=m
CONFIG_UNIX=y
# CONFIG_UNIX_DIAG is not set
# CONFIG_NET_KEY is not set
# CONFIG_INET is not set
CONFIG_NETWORK_SECMARK=y
# CONFIG_NETFILTER is not set
CONFIG_ATM=m
# CONFIG_ATM_LANE is not set
CONFIG_STP=m
CONFIG_BRIDGE=m
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
CONFIG_LLC=m
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_PHONET is not set
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
# CONFIG_NET_SCH_CBQ is not set
CONFIG_NET_SCH_HTB=m
# CONFIG_NET_SCH_HFSC is not set
CONFIG_NET_SCH_ATM=m
# CONFIG_NET_SCH_PRIO is not set
CONFIG_NET_SCH_MULTIQ=m
CONFIG_NET_SCH_RED=m
# CONFIG_NET_SCH_SFB is not set
CONFIG_NET_SCH_SFQ=m
CONFIG_NET_SCH_TEQL=m
# CONFIG_NET_SCH_TBF is not set
# CONFIG_NET_SCH_GRED is not set
CONFIG_NET_SCH_DSMARK=m
# CONFIG_NET_SCH_NETEM is not set
CONFIG_NET_SCH_DRR=m
CONFIG_NET_SCH_MQPRIO=m
CONFIG_NET_SCH_CHOKE=m
CONFIG_NET_SCH_QFQ=m
# CONFIG_NET_SCH_CODEL is not set
CONFIG_NET_SCH_FQ_CODEL=m
# CONFIG_NET_SCH_PLUG is not set

#
# Classification
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=m
CONFIG_NET_CLS_TCINDEX=m
CONFIG_NET_CLS_FW=m
CONFIG_NET_CLS_U32=m
# CONFIG_CLS_U32_PERF is not set
CONFIG_CLS_U32_MARK=y
# CONFIG_NET_CLS_RSVP is not set
CONFIG_NET_CLS_RSVP6=m
# CONFIG_NET_CLS_FLOW is not set
# CONFIG_NET_CLS_CGROUP is not set
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
CONFIG_NET_EMATCH_CMP=m
CONFIG_NET_EMATCH_NBYTE=m
CONFIG_NET_EMATCH_U32=m
CONFIG_NET_EMATCH_META=m
# CONFIG_NET_EMATCH_TEXT is not set
# CONFIG_NET_CLS_ACT is not set
CONFIG_NET_CLS_IND=y
CONFIG_NET_SCH_FIFO=y
# CONFIG_DCB is not set
CONFIG_DNS_RESOLVER=m
# CONFIG_BATMAN_ADV is not set
CONFIG_OPENVSWITCH=m
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
# CONFIG_NETPRIO_CGROUP is not set
CONFIG_BQL=y
CONFIG_BPF_JIT=y

#
# Network testing
#
CONFIG_NET_PKTGEN=m
CONFIG_HAMRADIO=y

#
# Packet Radio protocols
#
# CONFIG_AX25 is not set
# CONFIG_CAN is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
CONFIG_WIRELESS=y
CONFIG_WIRELESS_EXT=y
CONFIG_WEXT_CORE=y
CONFIG_WEXT_PROC=y
CONFIG_WEXT_SPY=y
CONFIG_WEXT_PRIV=y
CONFIG_CFG80211=m
# CONFIG_NL80211_TESTMODE is not set
# CONFIG_CFG80211_DEVELOPER_WARNINGS is not set
CONFIG_CFG80211_REG_DEBUG=y
# CONFIG_CFG80211_DEFAULT_PS is not set
# CONFIG_CFG80211_DEBUGFS is not set
# CONFIG_CFG80211_INTERNAL_REGDB is not set
CONFIG_CFG80211_WEXT=y
CONFIG_LIB80211=m
CONFIG_LIB80211_DEBUG=y
CONFIG_MAC80211=m
CONFIG_MAC80211_HAS_RC=y
CONFIG_MAC80211_RC_PID=y
CONFIG_MAC80211_RC_MINSTREL=y
CONFIG_MAC80211_RC_MINSTREL_HT=y
# CONFIG_MAC80211_RC_DEFAULT_PID is not set
CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT="minstrel_ht"
CONFIG_MAC80211_LEDS=y
CONFIG_MAC80211_DEBUGFS=y
# CONFIG_MAC80211_MESSAGE_TRACING is not set
# CONFIG_MAC80211_DEBUG_MENU is not set
CONFIG_WIMAX=m
CONFIG_WIMAX_DEBUG_LEVEL=8
# CONFIG_RFKILL is not set
CONFIG_RFKILL_REGULATOR=m
CONFIG_NET_9P=m
# CONFIG_NET_9P_VIRTIO is not set
# CONFIG_NET_9P_DEBUG is not set
# CONFIG_CAIF is not set
CONFIG_HAVE_BPF_JIT=y

#
# Device Drivers
#

#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER_PATH=""
CONFIG_DEVTMPFS=y
# CONFIG_DEVTMPFS_MOUNT is not set
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=m
# CONFIG_FIRMWARE_IN_KERNEL is not set
CONFIG_EXTRA_FIRMWARE=""
CONFIG_DEBUG_DRIVER=y
# CONFIG_DEBUG_DEVRES is not set
# CONFIG_SYS_HYPERVISOR is not set
# CONFIG_GENERIC_CPU_DEVICES is not set
CONFIG_REGMAP=y
CONFIG_REGMAP_I2C=m
CONFIG_DMA_SHARED_BUFFER=y
# CONFIG_CONNECTOR is not set
# CONFIG_MTD is not set
CONFIG_PARPORT=m
CONFIG_PARPORT_PC=m
CONFIG_PARPORT_PC_PCMCIA=m
# CONFIG_PARPORT_GSC is not set
CONFIG_PARPORT_AX88796=m
CONFIG_PARPORT_1284=y
CONFIG_PARPORT_NOT_PC=y
CONFIG_BLK_DEV=y
# CONFIG_PARIDE is not set
# CONFIG_BLK_DEV_COW_COMMON is not set
# CONFIG_BLK_DEV_LOOP is not set

#
# DRBD disabled because PROC_FS, INET or CONNECTOR not selected
#
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_RAM is not set
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
# CONFIG_BLK_DEV_HD is not set

#
# Misc devices
#
# CONFIG_SENSORS_LIS3LV02D is not set
# CONFIG_AD525X_DPOT is not set
CONFIG_ENCLOSURE_SERVICES=m
# CONFIG_APDS9802ALS is not set
CONFIG_ISL29003=m
CONFIG_ISL29020=m
# CONFIG_SENSORS_TSL2550 is not set
CONFIG_SENSORS_BH1780=m
CONFIG_SENSORS_BH1770=m
# CONFIG_SENSORS_APDS990X is not set
CONFIG_HMC6352=m
# CONFIG_VMWARE_BALLOON is not set
CONFIG_BMP085=y
CONFIG_BMP085_I2C=m
# CONFIG_USB_SWITCH_FSA9480 is not set

#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
CONFIG_EEPROM_LEGACY=m
# CONFIG_EEPROM_93CX6 is not set

#
# Texas Instruments shared transport line discipline
#
# CONFIG_TI_ST is not set
# CONFIG_SENSORS_LIS3_I2C is not set

#
# Altera FPGA firmware download module
#
CONFIG_ALTERA_STAPL=m
CONFIG_HAVE_IDE=y
CONFIG_IDE=m

#
# Please see Documentation/ide/ide.txt for help/info on IDE drives
#
CONFIG_IDE_XFER_MODE=y
CONFIG_IDE_TIMINGS=y
CONFIG_IDE_ATAPI=y
# CONFIG_BLK_DEV_IDE_SATA is not set
# CONFIG_IDE_GD is not set
# CONFIG_BLK_DEV_IDECS is not set
# CONFIG_BLK_DEV_IDECD is not set
CONFIG_BLK_DEV_IDETAPE=m
CONFIG_IDE_TASK_IOCTL=y
CONFIG_IDE_PROC_FS=y

#
# IDE chipset support/bugfixes
#
# CONFIG_IDE_GENERIC is not set
# CONFIG_BLK_DEV_PLATFORM is not set
CONFIG_BLK_DEV_CMD640=m
CONFIG_BLK_DEV_CMD640_ENHANCED=y
# CONFIG_BLK_DEV_IDEDMA is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=m
# CONFIG_RAID_ATTRS is not set
CONFIG_SCSI=m
CONFIG_SCSI_DMA=y
# CONFIG_SCSI_NETLINK is not set
# CONFIG_SCSI_PROC_FS is not set

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
# CONFIG_CHR_DEV_OSST is not set
CONFIG_BLK_DEV_SR=m
CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_CHR_DEV_SG=m
CONFIG_CHR_DEV_SCH=m
# CONFIG_SCSI_ENCLOSURE is not set
CONFIG_SCSI_MULTI_LUN=y
# CONFIG_SCSI_CONSTANTS is not set
# CONFIG_SCSI_LOGGING is not set
# CONFIG_SCSI_SCAN_ASYNC is not set

#
# SCSI Transports
#
# CONFIG_SCSI_SPI_ATTRS is not set
# CONFIG_SCSI_FC_ATTRS is not set
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_SCSI_SAS_ATA=y
# CONFIG_SCSI_SAS_HOST_SMP is not set
# CONFIG_SCSI_SRP_ATTRS is not set
# CONFIG_SCSI_LOWLEVEL is not set
CONFIG_SCSI_LOWLEVEL_PCMCIA=y
# CONFIG_PCMCIA_AHA152X is not set
# CONFIG_PCMCIA_FDOMAIN is not set
CONFIG_PCMCIA_QLOGIC=m
# CONFIG_PCMCIA_SYM53C500 is not set
# CONFIG_SCSI_DH is not set
# CONFIG_SCSI_OSD_INITIATOR is not set
CONFIG_ATA=m
# CONFIG_ATA_NONSTANDARD is not set
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
# CONFIG_SATA_AHCI_PLATFORM is not set
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_SATA_MV=m

#
# PATA SFF controllers with BMDMA
#
CONFIG_PATA_ARASAN_CF=m

#
# PIO-only SFF controllers
#
CONFIG_PATA_PCMCIA=m
CONFIG_PATA_PLATFORM=m

#
# Generic fallback / legacy drivers
#
CONFIG_MD=y
# CONFIG_BLK_DEV_MD is not set
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
# CONFIG_DM_CRYPT is not set
# CONFIG_DM_SNAPSHOT is not set
CONFIG_DM_MIRROR=m
# CONFIG_DM_RAID is not set
# CONFIG_DM_ZERO is not set
# CONFIG_DM_MULTIPATH is not set
# CONFIG_DM_UEVENT is not set
CONFIG_TARGET_CORE=m
# CONFIG_TCM_IBLOCK is not set
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_LOOPBACK_TARGET=m
# CONFIG_ISCSI_TARGET is not set
CONFIG_MACINTOSH_DRIVERS=y
# CONFIG_MAC_EMUMOUSEBTN is not set
CONFIG_NETDEVICES=y
CONFIG_NET_CORE=y
CONFIG_DUMMY=m
CONFIG_EQUALIZER=m
CONFIG_MII=m
CONFIG_NETCONSOLE=m
CONFIG_NETCONSOLE_DYNAMIC=y
CONFIG_NETPOLL=y
CONFIG_NETPOLL_TRAP=y
CONFIG_NET_POLL_CONTROLLER=y
# CONFIG_TUN is not set
CONFIG_VETH=m
CONFIG_ARCNET=m
CONFIG_ARCNET_1201=m
CONFIG_ARCNET_1051=m
# CONFIG_ARCNET_RAW is not set
# CONFIG_ARCNET_CAP is not set
CONFIG_ARCNET_COM90xx=m
# CONFIG_ARCNET_COM90xxIO is not set
# CONFIG_ARCNET_RIM_I is not set
CONFIG_ARCNET_COM20020=m
CONFIG_ARCNET_COM20020_CS=m
CONFIG_ATM_DRIVERS=y
CONFIG_ATM_DUMMY=m

#
# CAIF transport drivers
#
CONFIG_ETHERNET=y
# CONFIG_NET_VENDOR_3COM is not set
CONFIG_NET_VENDOR_AMD=y
CONFIG_PCMCIA_NMCLAN=m
# CONFIG_NET_VENDOR_BROADCOM is not set
# CONFIG_NET_CALXEDA_XGMAC is not set
# CONFIG_DNET is not set
# CONFIG_NET_VENDOR_DLINK is not set
CONFIG_NET_VENDOR_FUJITSU=y
# CONFIG_PCMCIA_FMVJ18X is not set
CONFIG_NET_VENDOR_MICREL=y
# CONFIG_KS8842 is not set
# CONFIG_KS8851_MLL is not set
CONFIG_NET_VENDOR_NATSEMI=y
CONFIG_NET_VENDOR_8390=y
CONFIG_PCMCIA_AXNET=m
CONFIG_PCMCIA_PCNET=m
CONFIG_ETHOC=m
# CONFIG_NET_VENDOR_REALTEK is not set
CONFIG_NET_VENDOR_SMSC=y
CONFIG_PCMCIA_SMC91C92=m
CONFIG_NET_VENDOR_STMICRO=y
CONFIG_STMMAC_ETH=m
# CONFIG_STMMAC_PLATFORM is not set
CONFIG_STMMAC_DEBUG_FS=y
# CONFIG_STMMAC_DA is not set
# CONFIG_STMMAC_RING is not set
CONFIG_STMMAC_CHAINED=y
CONFIG_NET_VENDOR_WIZNET=y
CONFIG_WIZNET_W5100=m
CONFIG_WIZNET_W5300=m
# CONFIG_WIZNET_BUS_DIRECT is not set
# CONFIG_WIZNET_BUS_INDIRECT is not set
CONFIG_WIZNET_BUS_ANY=y
# CONFIG_NET_VENDOR_XIRCOM is not set
CONFIG_PHYLIB=m

#
# MII PHY device drivers
#
CONFIG_AMD_PHY=m
# CONFIG_MARVELL_PHY is not set
# CONFIG_DAVICOM_PHY is not set
CONFIG_QSEMI_PHY=m
CONFIG_LXT_PHY=m
CONFIG_CICADA_PHY=m
CONFIG_VITESSE_PHY=m
# CONFIG_SMSC_PHY is not set
# CONFIG_BROADCOM_PHY is not set
# CONFIG_BCM87XX_PHY is not set
# CONFIG_ICPLUS_PHY is not set
# CONFIG_REALTEK_PHY is not set
# CONFIG_NATIONAL_PHY is not set
CONFIG_STE10XP=m
# CONFIG_LSI_ET1011C_PHY is not set
CONFIG_MICREL_PHY=m
CONFIG_MDIO_BITBANG=m
CONFIG_MDIO_GPIO=m
# CONFIG_PLIP is not set
CONFIG_PPP=m
CONFIG_PPP_BSDCOMP=m
CONFIG_PPP_DEFLATE=m
# CONFIG_PPP_FILTER is not set
# CONFIG_PPPOATM is not set
# CONFIG_PPP_ASYNC is not set
CONFIG_PPP_SYNC_TTY=m
CONFIG_SLIP=m
CONFIG_SLHC=m
# CONFIG_SLIP_COMPRESSED is not set
# CONFIG_SLIP_SMART is not set
# CONFIG_SLIP_MODE_SLIP6 is not set
CONFIG_WLAN=y
# CONFIG_PCMCIA_RAYCS is not set
CONFIG_LIBERTAS_THINFIRM=m
CONFIG_LIBERTAS_THINFIRM_DEBUG=y
# CONFIG_ATMEL is not set
CONFIG_AIRO_CS=m
# CONFIG_MAC80211_HWSIM is not set
CONFIG_ATH_COMMON=m
# CONFIG_ATH_DEBUG is not set
CONFIG_ATH9K_HW=m
CONFIG_ATH9K_COMMON=m
# CONFIG_ATH9K_BTCOEX_SUPPORT is not set
CONFIG_ATH9K=m
CONFIG_ATH9K_AHB=y
CONFIG_ATH9K_DEBUGFS=y
# CONFIG_ATH9K_DFS_CERTIFIED is not set
CONFIG_ATH9K_MAC_DEBUG=y
# CONFIG_ATH9K_RATE_CONTROL is not set
# CONFIG_ATH6KL is not set
CONFIG_B43=m
CONFIG_B43_BCMA=y
# CONFIG_B43_BCMA_EXTRA is not set
CONFIG_B43_SSB=y
# CONFIG_B43_PCMCIA is not set
CONFIG_B43_BCMA_PIO=y
CONFIG_B43_PIO=y
# CONFIG_B43_PHY_LP is not set
CONFIG_B43_LEDS=y
CONFIG_B43_HWRNG=y
# CONFIG_B43_DEBUG is not set
CONFIG_B43LEGACY=m
CONFIG_B43LEGACY_LEDS=y
CONFIG_B43LEGACY_HWRNG=y
# CONFIG_B43LEGACY_DEBUG is not set
CONFIG_B43LEGACY_DMA=y
CONFIG_B43LEGACY_PIO=y
CONFIG_B43LEGACY_DMA_AND_PIO_MODE=y
# CONFIG_B43LEGACY_DMA_MODE is not set
# CONFIG_B43LEGACY_PIO_MODE is not set
CONFIG_BRCMUTIL=m
CONFIG_BRCMSMAC=m
# CONFIG_BRCMFMAC is not set
# CONFIG_BRCMDBG is not set
# CONFIG_HOSTAP is not set
# CONFIG_LIBERTAS is not set
# CONFIG_HERMES is not set
CONFIG_RT2X00=m
CONFIG_WL_TI=y
# CONFIG_WL12XX is not set
# CONFIG_WL18XX is not set
# CONFIG_WLCORE is not set
# CONFIG_MWIFIEX is not set

#
# WiMAX Wireless Broadband devices
#

#
# Enable USB support to see WiMAX USB drivers
#
CONFIG_WAN=y
CONFIG_HDLC=m
# CONFIG_HDLC_RAW is not set
CONFIG_HDLC_RAW_ETH=m
CONFIG_HDLC_CISCO=m
# CONFIG_HDLC_FR is not set
# CONFIG_HDLC_PPP is not set

#
# X.25/LAPB support is disabled
#
# CONFIG_DLCI is not set
# CONFIG_SBNI is not set
# CONFIG_ISDN is not set

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_FF_MEMLESS=m
CONFIG_INPUT_POLLDEV=m
CONFIG_INPUT_SPARSEKMAP=m
CONFIG_INPUT_MATRIXKMAP=m

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=m
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
CONFIG_INPUT_EVDEV=m
CONFIG_INPUT_EVBUG=m

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
CONFIG_KEYBOARD_ADP5588=m
CONFIG_KEYBOARD_ADP5589=m
CONFIG_KEYBOARD_ATKBD=y
CONFIG_KEYBOARD_QT1070=m
# CONFIG_KEYBOARD_LKKBD is not set
CONFIG_KEYBOARD_GPIO=m
CONFIG_KEYBOARD_GPIO_POLLED=m
CONFIG_KEYBOARD_TCA6416=m
CONFIG_KEYBOARD_TCA8418=m
CONFIG_KEYBOARD_MATRIX=m
CONFIG_KEYBOARD_LM8323=m
# CONFIG_KEYBOARD_LM8333 is not set
CONFIG_KEYBOARD_MAX7359=m
CONFIG_KEYBOARD_MCS=m
# CONFIG_KEYBOARD_MPR121 is not set
CONFIG_KEYBOARD_NEWTON=m
CONFIG_KEYBOARD_OPENCORES=m
CONFIG_KEYBOARD_STOWAWAY=m
# CONFIG_KEYBOARD_SUNKBD is not set
CONFIG_KEYBOARD_OMAP4=m
CONFIG_KEYBOARD_XTKBD=m
CONFIG_INPUT_MOUSE=y
# CONFIG_MOUSE_PS2 is not set
# CONFIG_MOUSE_SERIAL is not set
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_GPIO is not set
CONFIG_MOUSE_SYNAPTICS_I2C=m
# CONFIG_MOUSE_SYNAPTICS_USB is not set
CONFIG_INPUT_JOYSTICK=y
CONFIG_JOYSTICK_ANALOG=m
# CONFIG_JOYSTICK_A3D is not set
# CONFIG_JOYSTICK_ADI is not set
# CONFIG_JOYSTICK_COBRA is not set
CONFIG_JOYSTICK_GF2K=m
CONFIG_JOYSTICK_GRIP=m
# CONFIG_JOYSTICK_GRIP_MP is not set
# CONFIG_JOYSTICK_GUILLEMOT is not set
CONFIG_JOYSTICK_INTERACT=m
# CONFIG_JOYSTICK_SIDEWINDER is not set
CONFIG_JOYSTICK_TMDC=m
CONFIG_JOYSTICK_IFORCE=m
CONFIG_JOYSTICK_IFORCE_232=y
CONFIG_JOYSTICK_WARRIOR=m
CONFIG_JOYSTICK_MAGELLAN=m
CONFIG_JOYSTICK_SPACEORB=m
# CONFIG_JOYSTICK_SPACEBALL is not set
# CONFIG_JOYSTICK_STINGER is not set
CONFIG_JOYSTICK_TWIDJOY=m
CONFIG_JOYSTICK_ZHENHUA=m
CONFIG_JOYSTICK_DB9=m
CONFIG_JOYSTICK_GAMECON=m
# CONFIG_JOYSTICK_TURBOGRAFX is not set
CONFIG_JOYSTICK_AS5011=m
CONFIG_JOYSTICK_JOYDUMP=m
# CONFIG_JOYSTICK_XPAD is not set
# CONFIG_JOYSTICK_WALKERA0701 is not set
# CONFIG_INPUT_TABLET is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
# CONFIG_INPUT_MISC is not set

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=m
CONFIG_SERIO_CT82C710=m
# CONFIG_SERIO_PARKBD is not set
CONFIG_SERIO_LIBPS2=y
CONFIG_SERIO_RAW=m
CONFIG_SERIO_ALTERA_PS2=m
# CONFIG_SERIO_PS2MULT is not set
CONFIG_GAMEPORT=m
# CONFIG_GAMEPORT_NS558 is not set
# CONFIG_GAMEPORT_L4 is not set

#
# Character devices
#
# CONFIG_VT is not set
# CONFIG_UNIX98_PTYS is not set
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
# CONFIG_SERIAL_NONSTANDARD is not set
# CONFIG_TRACE_ROUTER is not set
CONFIG_TRACE_SINK=m
CONFIG_DEVKMEM=y

#
# Serial drivers
#
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_FIX_EARLYCON_MEM=y
# CONFIG_SERIAL_8250_CS is not set
CONFIG_SERIAL_8250_NR_UARTS=4
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
# CONFIG_SERIAL_8250_EXTENDED is not set

#
# Non-8250 serial port support
#
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
# CONFIG_SERIAL_TIMBERDALE is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
CONFIG_SERIAL_ALTERA_UART=m
CONFIG_SERIAL_ALTERA_UART_MAXPORTS=4
CONFIG_SERIAL_ALTERA_UART_BAUDRATE=115200
# CONFIG_SERIAL_XILINX_PS_UART is not set
CONFIG_TTY_PRINTK=y
# CONFIG_PRINTER is not set
CONFIG_PPDEV=m
CONFIG_HVC_DRIVER=y
CONFIG_VIRTIO_CONSOLE=m
# CONFIG_IPMI_HANDLER is not set
CONFIG_HW_RANDOM=m
CONFIG_HW_RANDOM_TIMERIOMEM=m
# CONFIG_HW_RANDOM_VIA is not set
# CONFIG_HW_RANDOM_VIRTIO is not set
# CONFIG_NVRAM is not set
# CONFIG_RTC is not set
# CONFIG_GEN_RTC is not set
CONFIG_R3964=m

#
# PCMCIA character devices
#
CONFIG_SYNCLINK_CS=m
CONFIG_CARDMAN_4000=m
CONFIG_CARDMAN_4040=m
CONFIG_IPWIRELESS=m
# CONFIG_MWAVE is not set
CONFIG_RAW_DRIVER=m
CONFIG_MAX_RAW_DEVS=256
CONFIG_HANGCHECK_TIMER=m
CONFIG_TCG_TPM=y
CONFIG_TCG_TIS=y
# CONFIG_TCG_NSC is not set
# CONFIG_TCG_ATMEL is not set
CONFIG_I2C=m
CONFIG_I2C_BOARDINFO=y
# CONFIG_I2C_COMPAT is not set
CONFIG_I2C_CHARDEV=m
CONFIG_I2C_MUX=m

#
# Multiplexer I2C Chip support
#
CONFIG_I2C_MUX_GPIO=m
CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_ALGOBIT=m

#
# I2C Hardware Bus support
#

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
CONFIG_I2C_GPIO=m
# CONFIG_I2C_PCA_PLATFORM is not set
# CONFIG_I2C_PXA_PCI is not set
CONFIG_I2C_SIMTEC=m

#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_PARPORT is not set
# CONFIG_I2C_PARPORT_LIGHT is not set

#
# Other I2C/SMBus bus drivers
#
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
CONFIG_I2C_DEBUG_BUS=y
# CONFIG_SPI is not set
# CONFIG_HSI is not set

#
# PPS support
#

#
# PPS generators support
#

#
# PTP clock support
#

#
# Enable Device Drivers -> PPS to see the PTP clock options.
#
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
CONFIG_GPIOLIB=y
# CONFIG_DEBUG_GPIO is not set
CONFIG_GPIO_MAX730X=m

#
# Memory mapped GPIO drivers:
#
# CONFIG_GPIO_GENERIC_PLATFORM is not set
CONFIG_GPIO_IT8761E=m

#
# I2C GPIO expanders:
#
CONFIG_GPIO_MAX7300=m
# CONFIG_GPIO_MAX732X is not set
# CONFIG_GPIO_PCA953X is not set
# CONFIG_GPIO_PCF857X is not set
# CONFIG_GPIO_ADP5588 is not set

#
# PCI GPIO expanders:
#

#
# SPI GPIO expanders:
#
CONFIG_GPIO_MCP23S08=m

#
# AC97 GPIO expanders:
#

#
# MODULbus GPIO expanders:
#
CONFIG_W1=m

#
# 1-wire Bus Masters
#
# CONFIG_W1_MASTER_DS1WM is not set
# CONFIG_W1_MASTER_GPIO is not set

#
# 1-wire Slaves
#
CONFIG_W1_SLAVE_THERM=m
# CONFIG_W1_SLAVE_SMEM is not set
CONFIG_W1_SLAVE_DS2408=m
# CONFIG_W1_SLAVE_DS2423 is not set
CONFIG_W1_SLAVE_DS2431=m
CONFIG_W1_SLAVE_DS2433=m
# CONFIG_W1_SLAVE_DS2433_CRC is not set
# CONFIG_W1_SLAVE_DS2760 is not set
CONFIG_W1_SLAVE_DS2780=m
CONFIG_W1_SLAVE_DS2781=m
# CONFIG_W1_SLAVE_DS28E04 is not set
CONFIG_W1_SLAVE_BQ27000=m
CONFIG_POWER_SUPPLY=y
CONFIG_POWER_SUPPLY_DEBUG=y
# CONFIG_PDA_POWER is not set
CONFIG_TEST_POWER=m
CONFIG_BATTERY_DS2780=m
CONFIG_BATTERY_DS2781=m
CONFIG_BATTERY_DS2782=m
CONFIG_BATTERY_SBS=m
# CONFIG_BATTERY_BQ27x00 is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
CONFIG_CHARGER_PCF50633=m
# CONFIG_CHARGER_MAX8903 is not set
CONFIG_CHARGER_LP8727=m
CONFIG_CHARGER_GPIO=m
# CONFIG_CHARGER_SMB347 is not set
# CONFIG_POWER_AVS is not set
CONFIG_HWMON=m
CONFIG_HWMON_VID=m
CONFIG_HWMON_DEBUG_CHIP=y

#
# Native drivers
#
CONFIG_SENSORS_ADM1021=m
# CONFIG_SENSORS_ADM1025 is not set
CONFIG_SENSORS_ADM1026=m
# CONFIG_SENSORS_ADM1029 is not set
CONFIG_SENSORS_ADM1031=m
CONFIG_SENSORS_ADM9240=m
CONFIG_SENSORS_ADT7475=m
CONFIG_SENSORS_ASC7621=m
CONFIG_SENSORS_DS620=m
CONFIG_SENSORS_DS1621=m
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_F71882FG is not set
# CONFIG_SENSORS_F75375S is not set
CONFIG_SENSORS_FSCHMD=m
CONFIG_SENSORS_G760A=m
CONFIG_SENSORS_GL518SM=m
CONFIG_SENSORS_GL520SM=m
# CONFIG_SENSORS_GPIO_FAN is not set
# CONFIG_SENSORS_IT87 is not set
CONFIG_SENSORS_JC42=m
# CONFIG_SENSORS_LM63 is not set
CONFIG_SENSORS_LM73=m
CONFIG_SENSORS_LM75=m
CONFIG_SENSORS_LM77=m
# CONFIG_SENSORS_LM78 is not set
CONFIG_SENSORS_LM80=m
CONFIG_SENSORS_LM83=m
# CONFIG_SENSORS_LM85 is not set
CONFIG_SENSORS_LM87=m
# CONFIG_SENSORS_LM90 is not set
CONFIG_SENSORS_LM92=m
# CONFIG_SENSORS_LM93 is not set
CONFIG_SENSORS_LTC4151=m
CONFIG_SENSORS_LM95241=m
CONFIG_SENSORS_MAX16065=m
CONFIG_SENSORS_MAX1619=m
# CONFIG_SENSORS_PC87360 is not set
CONFIG_SENSORS_PC87427=m
# CONFIG_SENSORS_PCF8591 is not set
CONFIG_SENSORS_SHT15=m
# CONFIG_SENSORS_SHT21 is not set
# CONFIG_SENSORS_EMC1403 is not set
# CONFIG_SENSORS_EMC2103 is not set
CONFIG_SENSORS_EMC6W201=m
CONFIG_SENSORS_SMSC47M1=m
CONFIG_SENSORS_SMSC47M192=m
CONFIG_SENSORS_SCH56XX_COMMON=m
# CONFIG_SENSORS_SCH5627 is not set
CONFIG_SENSORS_SCH5636=m
CONFIG_SENSORS_ADS1015=m
CONFIG_SENSORS_ADS7828=m
# CONFIG_SENSORS_THMC50 is not set
CONFIG_SENSORS_VIA_CPUTEMP=m
CONFIG_SENSORS_VT1211=m
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83791D is not set
CONFIG_SENSORS_W83792D=m
# CONFIG_SENSORS_W83627HF is not set
# CONFIG_SENSORS_W83627EHF is not set
CONFIG_SENSORS_APPLESMC=m
# CONFIG_SENSORS_MC13783_ADC is not set
CONFIG_THERMAL=m
CONFIG_THERMAL_HWMON=y
CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
CONFIG_WATCHDOG_NOWAYOUT=y

#
# Watchdog Device Drivers
#
# CONFIG_SOFT_WATCHDOG is not set
# CONFIG_ACQUIRE_WDT is not set
CONFIG_ADVANTECH_WDT=m
CONFIG_SC520_WDT=m
CONFIG_SBC_FITPC2_WATCHDOG=m
# CONFIG_EUROTECH_WDT is not set
# CONFIG_IB700_WDT is not set
CONFIG_IBMASR=m
CONFIG_WAFER_WDT=m
# CONFIG_IT8712F_WDT is not set
CONFIG_SC1200_WDT=m
CONFIG_PC87413_WDT=m
# CONFIG_60XX_WDT is not set
CONFIG_SBC8360_WDT=m
# CONFIG_CPU5_WDT is not set
# CONFIG_SMSC_SCH311X_WDT is not set
# CONFIG_SMSC37B787_WDT is not set
# CONFIG_W83627HF_WDT is not set
# CONFIG_W83697HF_WDT is not set
CONFIG_W83697UG_WDT=m
CONFIG_W83877F_WDT=m
# CONFIG_W83977F_WDT is not set
# CONFIG_MACHZ_WDT is not set
# CONFIG_SBC_EPX_C3_WATCHDOG is not set
CONFIG_SSB_POSSIBLE=y

#
# Sonics Silicon Backplane
#
CONFIG_SSB=m
CONFIG_SSB_BLOCKIO=y
CONFIG_SSB_PCMCIAHOST_POSSIBLE=y
# CONFIG_SSB_PCMCIAHOST is not set
CONFIG_SSB_SDIOHOST_POSSIBLE=y
CONFIG_SSB_SDIOHOST=y
CONFIG_SSB_SILENT=y
CONFIG_BCMA_POSSIBLE=y

#
# Broadcom specific AMBA
#
CONFIG_BCMA=m
CONFIG_BCMA_BLOCKIO=y
# CONFIG_BCMA_DEBUG is not set

#
# Multifunction device drivers
#
CONFIG_MFD_CORE=m
CONFIG_MFD_SM501=m
# CONFIG_MFD_SM501_GPIO is not set
CONFIG_HTC_PASIC3=m
# CONFIG_MFD_LM3533 is not set
CONFIG_TPS6105X=m
# CONFIG_TPS65010 is not set
CONFIG_TPS6507X=m
# CONFIG_MFD_TPS65217 is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_MFD_ARIZONA_I2C is not set
CONFIG_MFD_PCF50633=m
CONFIG_PCF50633_ADC=m
# CONFIG_PCF50633_GPIO is not set
CONFIG_MFD_MC13783=m
CONFIG_MFD_MC13XXX=m
CONFIG_MFD_MC13XXX_I2C=m
CONFIG_ABX500_CORE=y
# CONFIG_MFD_WL1273_CORE is not set
CONFIG_REGULATOR=y
CONFIG_REGULATOR_DEBUG=y
# CONFIG_REGULATOR_DUMMY is not set
CONFIG_REGULATOR_FIXED_VOLTAGE=m
CONFIG_REGULATOR_VIRTUAL_CONSUMER=m
CONFIG_REGULATOR_USERSPACE_CONSUMER=m
# CONFIG_REGULATOR_GPIO is not set
CONFIG_REGULATOR_AD5398=m
# CONFIG_REGULATOR_MC13783 is not set
# CONFIG_REGULATOR_MC13892 is not set
CONFIG_REGULATOR_ISL6271A=m
# CONFIG_REGULATOR_MAX1586 is not set
CONFIG_REGULATOR_MAX8649=m
CONFIG_REGULATOR_MAX8660=m
CONFIG_REGULATOR_MAX8952=m
CONFIG_REGULATOR_LP3971=m
# CONFIG_REGULATOR_LP3972 is not set
# CONFIG_REGULATOR_PCF50633 is not set
CONFIG_REGULATOR_TPS6105X=m
# CONFIG_REGULATOR_TPS62360 is not set
# CONFIG_REGULATOR_TPS65023 is not set
CONFIG_REGULATOR_TPS6507X=m
# CONFIG_MEDIA_SUPPORT is not set

#
# Graphics support
#
CONFIG_DRM=m
# CONFIG_VGASTATE is not set
CONFIG_VIDEO_OUTPUT_CONTROL=m
# CONFIG_FB is not set
# CONFIG_EXYNOS_VIDEO is not set
# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=m
CONFIG_SOUND=m
# CONFIG_SOUND_OSS_CORE is not set
# CONFIG_SND is not set
# CONFIG_SOUND_PRIME is not set

#
# HID support
#
CONFIG_HID=m
# CONFIG_HIDRAW is not set
# CONFIG_UHID is not set
CONFIG_HID_GENERIC=m

#
# Special HID drivers
#
# CONFIG_USB_ARCH_HAS_OHCI is not set
# CONFIG_USB_ARCH_HAS_EHCI is not set
# CONFIG_USB_ARCH_HAS_XHCI is not set
CONFIG_USB_SUPPORT=y
CONFIG_USB_ARCH_HAS_HCD=y
# CONFIG_USB is not set
# CONFIG_USB_OTG_WHITELIST is not set
# CONFIG_USB_OTG_BLACKLIST_HUB is not set

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
#
# CONFIG_USB_GADGET is not set

#
# OTG and related infrastructure
#
CONFIG_MMC=m
CONFIG_MMC_DEBUG=y
CONFIG_MMC_UNSAFE_RESUME=y

#
# MMC/SD/SDIO Card Drivers
#
CONFIG_MMC_BLOCK=m
CONFIG_MMC_BLOCK_MINORS=8
# CONFIG_MMC_BLOCK_BOUNCE is not set
CONFIG_SDIO_UART=m
# CONFIG_MMC_TEST is not set

#
# MMC/SD/SDIO Host Controller Drivers
#
# CONFIG_MMC_SDHCI is not set
CONFIG_MEMSTICK=m
# CONFIG_MEMSTICK_DEBUG is not set

#
# MemoryStick drivers
#
CONFIG_MEMSTICK_UNSAFE_RESUME=y
# CONFIG_MSPRO_BLOCK is not set

#
# MemoryStick Host Controller Drivers
#
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=m

#
# LED drivers
#
# CONFIG_LEDS_LM3530 is not set
CONFIG_LEDS_GPIO=m
CONFIG_LEDS_LP3944=m
CONFIG_LEDS_LP5521=m
# CONFIG_LEDS_LP5523 is not set
CONFIG_LEDS_PCA955X=m
# CONFIG_LEDS_PCA9633 is not set
# CONFIG_LEDS_REGULATOR is not set
# CONFIG_LEDS_BD2802 is not set
# CONFIG_LEDS_LT3593 is not set
# CONFIG_LEDS_MC13783 is not set
CONFIG_LEDS_TCA6507=m
# CONFIG_LEDS_LM3556 is not set
CONFIG_LEDS_OT200=m
CONFIG_LEDS_TRIGGERS=y

#
# LED Triggers
#
CONFIG_LEDS_TRIGGER_TIMER=m
# CONFIG_LEDS_TRIGGER_ONESHOT is not set
CONFIG_LEDS_TRIGGER_HEARTBEAT=m
CONFIG_LEDS_TRIGGER_BACKLIGHT=m
# CONFIG_LEDS_TRIGGER_GPIO is not set
# CONFIG_LEDS_TRIGGER_DEFAULT_ON is not set

#
# iptables trigger is under Netfilter config (LED target)
#
CONFIG_LEDS_TRIGGER_TRANSIENT=m
# CONFIG_ACCESSIBILITY is not set
# CONFIG_EDAC is not set
# CONFIG_RTC_CLASS is not set
CONFIG_DMADEVICES=y
CONFIG_DMADEVICES_DEBUG=y
CONFIG_DMADEVICES_VDEBUG=y

#
# DMA Devices
#
# CONFIG_TIMB_DMA is not set
CONFIG_DMA_ENGINE=y

#
# DMA Clients
#
# CONFIG_NET_DMA is not set
# CONFIG_ASYNC_TX_DMA is not set
CONFIG_DMATEST=m
CONFIG_AUXDISPLAY=y
CONFIG_KS0108=m
CONFIG_KS0108_PORT=0x378
CONFIG_KS0108_DELAY=2
CONFIG_UIO=m
CONFIG_UIO_PDRV=m
CONFIG_UIO_PDRV_GENIRQ=m
CONFIG_VIRTIO=m
CONFIG_VIRTIO_RING=m

#
# Virtio drivers
#
CONFIG_VIRTIO_BALLOON=m

#
# Microsoft Hyper-V guest support
#
CONFIG_STAGING=y
CONFIG_ECHO=m
CONFIG_COMEDI=m
# CONFIG_COMEDI_DEBUG is not set
CONFIG_COMEDI_DEFAULT_BUF_SIZE_KB=2048
CONFIG_COMEDI_DEFAULT_BUF_MAXSIZE_KB=20480
# CONFIG_COMEDI_MISC_DRIVERS is not set
# CONFIG_COMEDI_PCMCIA_DRIVERS is not set
CONFIG_COMEDI_8255=m
# CONFIG_PANEL is not set
CONFIG_RTLLIB=m
CONFIG_RTLLIB_CRYPTO_CCMP=m
# CONFIG_RTLLIB_CRYPTO_TKIP is not set
CONFIG_RTLLIB_CRYPTO_WEP=m
CONFIG_ZRAM=m
CONFIG_ZRAM_DEBUG=y
CONFIG_ZSMALLOC=m
# CONFIG_WLAGS49_H2 is not set
CONFIG_WLAGS49_H25=m
# CONFIG_FT1000 is not set

#
# Speakup console speech
#
CONFIG_TOUCHSCREEN_CLEARPAD_TM1217=m
# CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4 is not set
CONFIG_STAGING_MEDIA=y

#
# Android
#
# CONFIG_ANDROID is not set
# CONFIG_PHONE is not set
# CONFIG_IPACK_BUS is not set
# CONFIG_WIMAX_GDM72XX is not set
CONFIG_X86_PLATFORM_DEVICES=y
CONFIG_SENSORS_HDAPS=m
# CONFIG_SAMSUNG_LAPTOP is not set
CONFIG_SAMSUNG_Q10=m

#
# Hardware Spinlock drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_CLKBLD_I8253=y
CONFIG_IOMMU_SUPPORT=y

#
# Remoteproc drivers (EXPERIMENTAL)
#

#
# Rpmsg drivers (EXPERIMENTAL)
#
CONFIG_VIRT_DRIVERS=y
# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
CONFIG_MEMORY=y
# CONFIG_IIO is not set
# CONFIG_PWM is not set

#
# Firmware Drivers
#
CONFIG_EDD=m
CONFIG_EDD_OFF=y
# CONFIG_FIRMWARE_MEMMAP is not set
# CONFIG_DELL_RBU is not set
CONFIG_DCDBAS=m
CONFIG_ISCSI_IBFT_FIND=y
# CONFIG_GOOGLE_FIRMWARE is not set

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
CONFIG_EXT2_FS=m
# CONFIG_EXT2_FS_XATTR is not set
CONFIG_EXT2_FS_XIP=y
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=m
CONFIG_EXT4_USE_FOR_EXT23=y
# CONFIG_EXT4_FS_XATTR is not set
CONFIG_EXT4_DEBUG=y
CONFIG_FS_XIP=y
CONFIG_JBD2=m
CONFIG_JBD2_DEBUG=y
CONFIG_REISERFS_FS=m
# CONFIG_REISERFS_CHECK is not set
# CONFIG_REISERFS_PROC_INFO is not set
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_JFS_FS=m
CONFIG_JFS_POSIX_ACL=y
# CONFIG_JFS_SECURITY is not set
CONFIG_JFS_DEBUG=y
CONFIG_JFS_STATISTICS=y
# CONFIG_XFS_FS is not set
CONFIG_GFS2_FS=m
# CONFIG_OCFS2_FS is not set
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
CONFIG_FILE_LOCKING=y
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_FANOTIFY=y
CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
# CONFIG_QUOTA is not set
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y
CONFIG_AUTOFS4_FS=m
# CONFIG_FUSE_FS is not set

#
# Caches
#
CONFIG_FSCACHE=m
# CONFIG_FSCACHE_STATS is not set
# CONFIG_FSCACHE_HISTOGRAM is not set
# CONFIG_FSCACHE_DEBUG is not set
CONFIG_FSCACHE_OBJECT_LIST=y
# CONFIG_CACHEFILES is not set

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
# CONFIG_ZISOFS is not set
# CONFIG_UDF_FS is not set

#
# DOS/FAT/NT Filesystems
#
# CONFIG_MSDOS_FS is not set
# CONFIG_VFAT_FS is not set
# CONFIG_NTFS_FS is not set

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_SYSFS=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_CONFIGFS_FS=m
# CONFIG_MISC_FILESYSTEMS is not set
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS=m
CONFIG_NLS_DEFAULT="iso8859-1"
# CONFIG_NLS_CODEPAGE_437 is not set
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
# CONFIG_NLS_CODEPAGE_850 is not set
CONFIG_NLS_CODEPAGE_852=m
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
CONFIG_NLS_CODEPAGE_860=m
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
CONFIG_NLS_CODEPAGE_863=m
# CONFIG_NLS_CODEPAGE_864 is not set
CONFIG_NLS_CODEPAGE_865=m
CONFIG_NLS_CODEPAGE_866=m
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
CONFIG_NLS_CODEPAGE_950=m
# CONFIG_NLS_CODEPAGE_932 is not set
CONFIG_NLS_CODEPAGE_949=m
# CONFIG_NLS_CODEPAGE_874 is not set
CONFIG_NLS_ISO8859_8=m
CONFIG_NLS_CODEPAGE_1250=m
# CONFIG_NLS_CODEPAGE_1251 is not set
# CONFIG_NLS_ASCII is not set
CONFIG_NLS_ISO8859_1=m
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
CONFIG_NLS_ISO8859_4=m
CONFIG_NLS_ISO8859_5=m
CONFIG_NLS_ISO8859_6=m
# CONFIG_NLS_ISO8859_7 is not set
CONFIG_NLS_ISO8859_9=m
CONFIG_NLS_ISO8859_13=m
CONFIG_NLS_ISO8859_14=m
CONFIG_NLS_ISO8859_15=m
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
CONFIG_NLS_MAC_ROMAN=m
# CONFIG_NLS_MAC_CELTIC is not set
CONFIG_NLS_MAC_CENTEURO=m
CONFIG_NLS_MAC_CROATIAN=m
# CONFIG_NLS_MAC_CYRILLIC is not set
# CONFIG_NLS_MAC_GAELIC is not set
CONFIG_NLS_MAC_GREEK=m
# CONFIG_NLS_MAC_ICELAND is not set
CONFIG_NLS_MAC_INUIT=m
CONFIG_NLS_MAC_ROMANIAN=m
# CONFIG_NLS_MAC_TURKISH is not set
# CONFIG_NLS_UTF8 is not set

#
# Kernel hacking
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_PRINTK_TIME=y
CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
CONFIG_ENABLE_WARN_DEPRECATED=y
CONFIG_ENABLE_MUST_CHECK=y
CONFIG_FRAME_WARN=2048
# CONFIG_MAGIC_SYSRQ is not set
# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_READABLE_ASM is not set
# CONFIG_UNUSED_SYMBOLS is not set
CONFIG_DEBUG_FS=y
# CONFIG_HEADERS_CHECK is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_SHIRQ=y
CONFIG_LOCKUP_DETECTOR=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=1
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=1
# CONFIG_PANIC_ON_OOPS is not set
CONFIG_PANIC_ON_OOPS_VALUE=0
# CONFIG_DETECT_HUNG_TASK is not set
CONFIG_SCHED_DEBUG=y
CONFIG_SCHEDSTATS=y
# CONFIG_TIMER_STATS is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_SLUB_DEBUG_ON is not set
CONFIG_SLUB_STATS=y
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_RT_MUTEX_TESTER is not set
CONFIG_DEBUG_SPINLOCK=y
CONFIG_DEBUG_MUTEXES=y
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y
# CONFIG_PROVE_RCU is not set
# CONFIG_SPARSE_RCU_POINTER is not set
CONFIG_LOCKDEP=y
# CONFIG_LOCK_STAT is not set
# CONFIG_DEBUG_LOCKDEP is not set
CONFIG_TRACE_IRQFLAGS=y
# CONFIG_DEBUG_ATOMIC_SLEEP is not set
CONFIG_DEBUG_LOCKING_API_SELFTESTS=y
CONFIG_STACKTRACE=y
CONFIG_DEBUG_STACK_USAGE=y
# CONFIG_DEBUG_KOBJECT is not set
CONFIG_DEBUG_BUGVERBOSE=y
# CONFIG_DEBUG_INFO is not set
# CONFIG_DEBUG_VM is not set
# CONFIG_DEBUG_VIRTUAL is not set
# CONFIG_DEBUG_WRITECOUNT is not set
CONFIG_DEBUG_MEMORY_INIT=y
CONFIG_DEBUG_LIST=y
# CONFIG_TEST_LIST_SORT is not set
CONFIG_DEBUG_SG=y
CONFIG_DEBUG_NOTIFIERS=y
# CONFIG_DEBUG_CREDENTIALS is not set
CONFIG_ARCH_WANT_FRAME_POINTERS=y
CONFIG_FRAME_POINTER=y
# CONFIG_BOOT_PRINTK_DELAY is not set
# CONFIG_RCU_TORTURE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60
# CONFIG_RCU_CPU_STALL_INFO is not set
CONFIG_RCU_TRACE=y
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
CONFIG_DEBUG_FORCE_WEAK_PER_CPU=y
# CONFIG_DEBUG_PER_CPU_MAPS is not set
# CONFIG_LKDTM is not set
CONFIG_CPU_NOTIFIER_ERROR_INJECT=m
# CONFIG_FAULT_INJECTION is not set
CONFIG_LATENCYTOP=y
CONFIG_DEBUG_PAGEALLOC=y
CONFIG_WANT_PAGE_DEBUG_FLAGS=y
CONFIG_PAGE_GUARD=y
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
# CONFIG_EVENT_POWER_TRACING_DEPRECATED is not set
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_RING_BUFFER_ALLOW_SWAP=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
CONFIG_FUNCTION_TRACER=y
# CONFIG_FUNCTION_GRAPH_TRACER is not set
# CONFIG_IRQSOFF_TRACER is not set
# CONFIG_SCHED_TRACER is not set
CONFIG_FTRACE_SYSCALLS=y
CONFIG_TRACE_BRANCH_PROFILING=y
# CONFIG_BRANCH_PROFILE_NONE is not set
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
CONFIG_PROFILE_ALL_BRANCHES=y
# CONFIG_BRANCH_TRACER is not set
CONFIG_STACK_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y
# CONFIG_UPROBE_EVENT is not set
# CONFIG_PROBE_EVENTS is not set
CONFIG_DYNAMIC_FTRACE=y
# CONFIG_FUNCTION_PROFILER is not set
CONFIG_FTRACE_MCOUNT_RECORD=y
CONFIG_FTRACE_SELFTEST=y
CONFIG_FTRACE_STARTUP_TEST=y
# CONFIG_EVENT_TRACE_TEST_SYSCALLS is not set
# CONFIG_RING_BUFFER_BENCHMARK is not set
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DMA_API_DEBUG=y
CONFIG_ATOMIC64_SELFTEST=y
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KGDB=y
CONFIG_HAVE_ARCH_KMEMCHECK=y
# CONFIG_TEST_KSTRTOX is not set
CONFIG_STRICT_DEVMEM=y
# CONFIG_X86_VERBOSE_BOOTUP is not set
# CONFIG_EARLY_PRINTK is not set
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_X86_PTDUMP=y
# CONFIG_DEBUG_RODATA is not set
CONFIG_DEBUG_SET_MODULE_RONX=y
# CONFIG_DEBUG_NX_TEST is not set
# CONFIG_IOMMU_STRESS is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
CONFIG_IO_DELAY_TYPE_0X80=0
CONFIG_IO_DELAY_TYPE_0XED=1
CONFIG_IO_DELAY_TYPE_UDELAY=2
CONFIG_IO_DELAY_TYPE_NONE=3
# CONFIG_IO_DELAY_0X80 is not set
CONFIG_IO_DELAY_0XED=y
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEFAULT_IO_DELAY_TYPE=1
# CONFIG_DEBUG_BOOT_PARAMS is not set
# CONFIG_CPA_DEBUG is not set
CONFIG_OPTIMIZE_INLINING=y
CONFIG_DEBUG_NMI_SELFTEST=y

#
# Security options
#
CONFIG_KEYS=y
# CONFIG_TRUSTED_KEYS is not set
CONFIG_ENCRYPTED_KEYS=m
# CONFIG_KEYS_DEBUG_PROC_KEYS is not set
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
CONFIG_SECURITY_PATH=y
# CONFIG_SECURITY_TOMOYO is not set
# CONFIG_SECURITY_APPARMOR is not set
CONFIG_SECURITY_YAMA=y
CONFIG_INTEGRITY=y
# CONFIG_INTEGRITY_SIGNATURE is not set
CONFIG_IMA=y
CONFIG_IMA_MEASURE_PCR_IDX=10
CONFIG_IMA_AUDIT=y
# CONFIG_EVM is not set
CONFIG_DEFAULT_SECURITY_YAMA=y
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_DEFAULT_SECURITY="yama"
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=m
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=m
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=m
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_PCOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
CONFIG_CRYPTO_USER=m
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
CONFIG_CRYPTO_GF128MUL=m
CONFIG_CRYPTO_NULL=m
CONFIG_CRYPTO_WORKQUEUE=y
CONFIG_CRYPTO_CRYPTD=m
CONFIG_CRYPTO_AUTHENC=m
# CONFIG_CRYPTO_TEST is not set
CONFIG_CRYPTO_ABLK_HELPER_X86=m
CONFIG_CRYPTO_GLUE_HELPER_X86=m

#
# Authenticated Encryption with Associated Data
#
CONFIG_CRYPTO_CCM=m
# CONFIG_CRYPTO_GCM is not set
CONFIG_CRYPTO_SEQIV=m

#
# Block modes
#
CONFIG_CRYPTO_CBC=m
CONFIG_CRYPTO_CTR=m
# CONFIG_CRYPTO_CTS is not set
# CONFIG_CRYPTO_ECB is not set
CONFIG_CRYPTO_LRW=m
# CONFIG_CRYPTO_PCBC is not set
CONFIG_CRYPTO_XTS=m

#
# Hash modes
#
CONFIG_CRYPTO_HMAC=y

#
# Digest
#
CONFIG_CRYPTO_CRC32C=m
# CONFIG_CRYPTO_CRC32C_INTEL is not set
# CONFIG_CRYPTO_GHASH is not set
# CONFIG_CRYPTO_MD4 is not set
CONFIG_CRYPTO_MD5=y
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_RMD128 is not set
CONFIG_CRYPTO_RMD160=m
# CONFIG_CRYPTO_RMD256 is not set
# CONFIG_CRYPTO_RMD320 is not set
CONFIG_CRYPTO_SHA1=y
# CONFIG_CRYPTO_SHA1_SSSE3 is not set
CONFIG_CRYPTO_SHA256=m
# CONFIG_CRYPTO_SHA512 is not set
CONFIG_CRYPTO_TGR192=m
CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL=m

#
# Ciphers
#
CONFIG_CRYPTO_AES=m
CONFIG_CRYPTO_AES_X86_64=m
# CONFIG_CRYPTO_AES_NI_INTEL is not set
CONFIG_CRYPTO_ANUBIS=m
CONFIG_CRYPTO_ARC4=m
CONFIG_CRYPTO_BLOWFISH=m
CONFIG_CRYPTO_BLOWFISH_COMMON=m
CONFIG_CRYPTO_BLOWFISH_X86_64=m
# CONFIG_CRYPTO_CAMELLIA is not set
CONFIG_CRYPTO_CAMELLIA_X86_64=m
CONFIG_CRYPTO_CAST5=m
CONFIG_CRYPTO_CAST6=m
# CONFIG_CRYPTO_DES is not set
# CONFIG_CRYPTO_FCRYPT is not set
CONFIG_CRYPTO_KHAZAD=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SERPENT_SSE2_X86_64=m
# CONFIG_CRYPTO_SERPENT_AVX_X86_64 is not set
CONFIG_CRYPTO_TEA=m
# CONFIG_CRYPTO_TWOFISH is not set
CONFIG_CRYPTO_TWOFISH_COMMON=m
CONFIG_CRYPTO_TWOFISH_X86_64=m
CONFIG_CRYPTO_TWOFISH_X86_64_3WAY=m
# CONFIG_CRYPTO_TWOFISH_AVX_X86_64 is not set

#
# Compression
#
CONFIG_CRYPTO_DEFLATE=m
# CONFIG_CRYPTO_ZLIB is not set
CONFIG_CRYPTO_LZO=m

#
# Random Number Generation
#
# CONFIG_CRYPTO_ANSI_CPRNG is not set
CONFIG_CRYPTO_USER_API=m
# CONFIG_CRYPTO_USER_API_HASH is not set
CONFIG_CRYPTO_USER_API_SKCIPHER=m
CONFIG_CRYPTO_HW=y
# CONFIG_CRYPTO_DEV_PADLOCK is not set
CONFIG_HAVE_KVM=y
# CONFIG_VIRTUALIZATION is not set
CONFIG_BINARY_PRINTF=y

#
# Library routines
#
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_IO=y
CONFIG_CRC_CCITT=m
CONFIG_CRC16=m
# CONFIG_CRC_T10DIF is not set
# CONFIG_CRC_ITU_T is not set
CONFIG_CRC32=y
CONFIG_CRC32_SELFTEST=y
# CONFIG_CRC32_SLICEBY8 is not set
CONFIG_CRC32_SLICEBY4=y
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC7 is not set
# CONFIG_LIBCRC32C is not set
CONFIG_CRC8=m
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=m
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
# CONFIG_XZ_DEC is not set
# CONFIG_XZ_DEC_BCJ is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_NLATTR=y
CONFIG_AVERAGE=y
CONFIG_CORDIC=m
CONFIG_DDR=y


* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
@ 2012-07-12 13:06       ` Fengguang Wu
  0 siblings, 0 replies; 96+ messages in thread
From: Fengguang Wu @ 2012-07-12 13:06 UTC (permalink / raw)
  To: Tejun Heo
  Cc: axboe, elder, rni, martin.petersen, linux-bluetooth, torvalds,
	marcel, linux-kernel, vwadekar, swhiteho, herbert, bpm,
	linux-crypto, gustavo, xfs, joshhunt00, davem, vgoyal,
	johan.hedberg

[-- Attachment #1: Type: message/external-body, Size: 509 bytes --]

[-- Attachment #2: dmesg-kvm-slim-4225-2012-07-12-19-15-31 --]
[-- Type: text/plain, Size: 28151 bytes --]

[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.5.0-rc6-08414-g9645fff (kbuild@snb) (gcc version 4.7.0 (Debian 4.7.0-11) ) #15 SMP Thu Jul 12 19:12:36 CST 2012
[    0.000000] Command line: trinity=10m tree=mm:akpm auth_hashtable_size=10 sunrpc.auth_hashtable_size=10 log_buf_len=8M ignore_loglevel debug sched_debug apic=debug dynamic_printk sysrq_always_enabled panic=10 hung_task_panic=1 softlockup_panic=1 unknown_nmi_panic=1 nmi_watchdog=panic,lapic  prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal  root=/dev/ram0 rw link=vmlinuz-2012-07-12-19-14-51-mm-origin.akpm-674d249-9645fff-x86_64-randconfig-mm7-1-slim BOOT_IMAGE=kernel-tests/kernels/x86_64-randconfig-mm7/9645fffacccf3082c94097b03e5f950e4713f18a/vmlinuz-3.5.0-rc6-08414-g9645fff
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   Centaur CentaurHauls
[    0.000000] Disabled fast string operations
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x0000000000093bff] usable
[    0.000000] BIOS-e820: [mem 0x0000000000093c00-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000000fffcfff] usable
[    0.000000] BIOS-e820: [mem 0x000000000fffd000-0x000000000fffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
[    0.000000] debug: ignoring loglevel setting.
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] e820: update [mem 0x00000000-0x0000ffff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0xfffd max_arch_pfn = 0x400000000
[    0.000000] MTRR default type: write-back
[    0.000000] MTRR fixed ranges enabled:
[    0.000000]   00000-9FFFF write-back
[    0.000000]   A0000-BFFFF uncachable
[    0.000000]   C0000-FFFFF write-protect
[    0.000000] MTRR variable ranges enabled:
[    0.000000]   0 base 00E0000000 mask FFE0000000 uncachable
[    0.000000]   1 disabled
[    0.000000]   2 disabled
[    0.000000]   3 disabled
[    0.000000]   4 disabled
[    0.000000]   5 disabled
[    0.000000]   6 disabled
[    0.000000]   7 disabled
[    0.000000] Scan for SMP in [mem 0x00000000-0x000003ff]
[    0.000000] Scan for SMP in [mem 0x0009fc00-0x0009ffff]
[    0.000000] Scan for SMP in [mem 0x000f0000-0x000fffff]
[    0.000000] found SMP MP-table at [mem 0x000fdac0-0x000fdacf] mapped at [ffff8800000fdac0]
[    0.000000]   mpc: fdad0-fdbec
[    0.000000] initial memory mapped: [mem 0x00000000-0x1fffffff]
[    0.000000] Base memory trampoline at [ffff88000008d000] 8d000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x0fffcfff]
[    0.000000]  [mem 0x00000000-0x0fffcfff] page 4k
[    0.000000] kernel direct mapping tables up to 0xfffcfff @ [mem 0x0e854000-0x0e8d5fff]
[    0.000000] log_buf_len: 8388608
[    0.000000] early log buf free: 127940(97%)
[    0.000000] RAMDISK: [mem 0x0e8d6000-0x0ffeffff]
[    0.000000] No NUMA configuration found
[    0.000000] Faking a node at [mem 0x0000000000000000-0x000000000fffcfff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x0fffcfff]
[    0.000000]   NODE_DATA [mem 0x0fff8000-0x0fffcfff]
[    0.000000] kvm-clock: Using msrs 12 and 11
[    0.000000] kvm-clock: cpu 0, msr 0:1c6ce01, boot clock
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00010000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00010000-0x00092fff]
[    0.000000]   node   0: [mem 0x00100000-0x0fffcfff]
[    0.000000] On node 0 totalpages: 65408
[    0.000000]   DMA zone: 64 pages used for memmap
[    0.000000]   DMA zone: 6 pages reserved
[    0.000000]   DMA zone: 3901 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 960 pages used for memmap
[    0.000000]   DMA32 zone: 60477 pages, LIFO batch:15
[    0.000000] Intel MultiProcessor Specification v1.4
[    0.000000]   mpc: fdad0-fdbec
[    0.000000] MPTABLE: OEM ID: BOCHSCPU
[    0.000000] MPTABLE: Product ID: 0.1         
[    0.000000] MPTABLE: APIC at: 0xFEE00000
[    0.000000] mapped APIC to ffffffffff5fb000 (        fee00000)
[    0.000000] Processor #0 (Bootup-CPU)
[    0.000000] Processor #1
[    0.000000] Bus #0 is PCI   
[    0.000000] Bus #1 is ISA   
[    0.000000] IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 04, APIC ID 2, APIC INT 09
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 0c, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 10, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 14, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 18, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 1c, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 20, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 24, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 00, APIC ID 2, APIC INT 02
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 01, APIC ID 2, APIC INT 01
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 03, APIC ID 2, APIC INT 03
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 04, APIC ID 2, APIC INT 04
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 05, APIC ID 2, APIC INT 05
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 06, APIC ID 2, APIC INT 06
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 07, APIC ID 2, APIC INT 07
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 08, APIC ID 2, APIC INT 08
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0c, APIC ID 2, APIC INT 0c
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0d, APIC ID 2, APIC INT 0d
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0e, APIC ID 2, APIC INT 0e
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0f, APIC ID 2, APIC INT 0f
[    0.000000] Lint: type 3, pol 0, trig 0, bus 01, IRQ 00, APIC ID 0, APIC LINT 00
[    0.000000] Lint: type 1, pol 0, trig 0, bus 01, IRQ 00, APIC ID 0, APIC LINT 01
[    0.000000] Processors: 2
[    0.000000] smpboot: Allowing 2 CPUs, 0 hotplug CPUs
[    0.000000] mapped IOAPIC to ffffffffff5fa000 (fec00000)
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: 0000000000093000 - 0000000000094000
[    0.000000] PM: Registered nosave memory: 0000000000094000 - 00000000000a0000
[    0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000
[    0.000000] PM: Registered nosave memory: 00000000000f0000 - 0000000000100000
[    0.000000] e820: [mem 0x10000000-0xfeffbfff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on KVM
[    0.000000] setup_percpu: NR_CPUS:8 nr_cpumask_bits:8 nr_cpu_ids:2 nr_node_ids:1
[    0.000000] PERCPU: Embedded 26 pages/cpu @ffff88000dc00000 s76800 r8192 d21504 u1048576
[    0.000000] pcpu-alloc: s76800 r8192 d21504 u1048576 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 
[    0.000000] kvm-clock: cpu 0, msr 0:dc11e01, primary cpu clock
[    0.000000] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 64378
[    0.000000] Policy zone: DMA32
[    0.000000] Kernel command line: trinity=10m tree=mm:akpm auth_hashtable_size=10 sunrpc.auth_hashtable_size=10 log_buf_len=8M ignore_loglevel debug sched_debug apic=debug dynamic_printk sysrq_always_enabled panic=10 hung_task_panic=1 softlockup_panic=1 unknown_nmi_panic=1 nmi_watchdog=panic,lapic  prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal  root=/dev/ram0 rw link=vmlinuz-2012-07-12-19-14-51-mm-origin.akpm-674d249-9645fff-x86_64-randconfig-mm7-1-slim BOOT_IMAGE=kernel-tests/kernels/x86_64-randconfig-mm7/9645fffacccf3082c94097b03e5f950e4713f18a/vmlinuz-3.5.0-rc6-08414-g9645fff
[    0.000000] PID hash table entries: 1024 (order: 1, 8192 bytes)
[    0.000000] __ex_table already sorted, skipping sort
[    0.000000] Memory: 199892k/262132k available (4847k kernel code, 500k absent, 61740k reserved, 7791k data, 568k init)
[    0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
[    0.000000] Hierarchical RCU implementation.
[    0.000000] 	RCU debugfs-based tracing is enabled.
[    0.000000] 	RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
[    0.000000] NR_IRQS:4352 nr_irqs:56 16
[    0.000000] console [ttyS0] enabled
[    0.000000] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
[    0.000000] ... MAX_LOCKDEP_SUBCLASSES:  8
[    0.000000] ... MAX_LOCK_DEPTH:          48
[    0.000000] ... MAX_LOCKDEP_KEYS:        8191
[    0.000000] ... CLASSHASH_SIZE:          4096
[    0.000000] ... MAX_LOCKDEP_ENTRIES:     16384
[    0.000000] ... MAX_LOCKDEP_CHAINS:      32768
[    0.000000] ... CHAINHASH_SIZE:          16384
[    0.000000]  memory used by lock dependency info: 5855 kB
[    0.000000]  per task-struct memory footprint: 1920 bytes
[    0.000000] ------------------------
[    0.000000] | Locking API testsuite:
[    0.000000] ----------------------------------------------------------------------------
[    0.000000]                                  | spin |wlock |rlock |mutex | wsem | rsem |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]                      A-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                  A-B-B-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]              A-B-B-C-C-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]              A-B-C-A-B-C deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-B-C-C-D-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-C-D-B-D-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-C-D-B-C-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                     double unlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                   initialize held:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                  bad unlock order:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]               recursive read-lock:             |  ok  |             |  ok  |
[    0.000000]            recursive read-lock #2:             |  ok  |             |  ok  |
[    0.000000]             mixed read-write-lock:             |  ok  |             |  ok  |
[    0.000000]             mixed write-read-lock:             |  ok  |             |  ok  |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]      hard-irqs-on + irq-safe-A/12:  ok  |  ok  |  ok  |
[    0.000000]      soft-irqs-on + irq-safe-A/12:  ok  |  ok  |  ok  |
[    0.000000]      hard-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
[    0.000000]      soft-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
[    0.000000]        sirq-safe-A => hirqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]        sirq-safe-A => hirqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]          hard-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]          soft-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]          hard-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]          soft-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/132:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/132:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/213:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/213:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/231:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/231:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/312:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/312:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/321:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/321:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/123:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/123:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/132:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/132:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/213:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/213:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/231:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/231:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/312:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/312:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/321:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/321:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/123:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/123:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/132:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/132:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/213:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/213:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/231:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/231:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/312:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/312:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/321:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/321:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq read-recursion/123:  ok  |
[    0.000000]       soft-irq read-recursion/123:  ok  |
[    0.000000]       hard-irq read-recursion/132:  ok  |
[    0.000000]       soft-irq read-recursion/132:  ok  |
[    0.000000]       hard-irq read-recursion/213:  ok  |
[    0.000000]       soft-irq read-recursion/213:  ok  |
[    0.000000]       hard-irq read-recursion/231:  ok  |
[    0.000000]       soft-irq read-recursion/231:  ok  |
[    0.000000]       hard-irq read-recursion/312:  ok  |
[    0.000000]       soft-irq read-recursion/312:  ok  |
[    0.000000]       hard-irq read-recursion/321:  ok  |
[    0.000000]       soft-irq read-recursion/321:  ok  |
[    0.000000] -------------------------------------------------------
[    0.000000] Good, all 218 testcases passed! |
[    0.000000] ---------------------------------
[    0.000000] tsc: Detected 2999.462 MHz processor
[    0.001999] Calibrating delay loop (skipped) preset value.. 5998.92 BogoMIPS (lpj=2999462)
[    0.003010] pid_max: default: 32768 minimum: 301
[    0.005213] Security Framework initialized
[    0.006075] Yama: becoming mindful.
[    0.008740] Dentry cache hash table entries: 32768 (order: 6, 262144 bytes)
[    0.011850] Inode-cache hash table entries: 16384 (order: 5, 131072 bytes)
[    0.014061] Mount-cache hash table entries: 256
[    0.018011] Initializing cgroup subsys debug
[    0.019012] Initializing cgroup subsys freezer
[    0.020010] Initializing cgroup subsys perf_event
[    0.021165] Disabled fast string operations
[    0.024612] ftrace: allocating 11013 entries in 44 pages
[    0.033344] Getting VERSION: 50014
[    0.034015] Getting VERSION: 50014
[    0.035014] Getting ID: 0
[    0.035731] Getting ID: ff000000
[    0.036014] Getting LVT0: 8700
[    0.037011] Getting LVT1: 8400
[    0.038084] enabled ExtINT on CPU#0
[    0.040907] ENABLING IO-APIC IRQs
[    0.041011] init IO_APIC IRQs
[    0.042007]  apic 2 pin 0 not connected
[    0.043041] IOAPIC[0]: Set routing entry (2-1 -> 0x41 -> IRQ 1 Mode:0 Active:0 Dest:1)
[    0.045035] IOAPIC[0]: Set routing entry (2-2 -> 0x51 -> IRQ 0 Mode:0 Active:0 Dest:1)
[    0.047032] IOAPIC[0]: Set routing entry (2-3 -> 0x61 -> IRQ 3 Mode:0 Active:0 Dest:1)
[    0.049046] IOAPIC[0]: Set routing entry (2-4 -> 0x71 -> IRQ 4 Mode:0 Active:0 Dest:1)
[    0.051027] IOAPIC[0]: Set routing entry (2-5 -> 0x81 -> IRQ 5 Mode:0 Active:0 Dest:1)
[    0.053027] IOAPIC[0]: Set routing entry (2-6 -> 0x91 -> IRQ 6 Mode:0 Active:0 Dest:1)
[    0.055027] IOAPIC[0]: Set routing entry (2-7 -> 0xa1 -> IRQ 7 Mode:0 Active:0 Dest:1)
[    0.057026] IOAPIC[0]: Set routing entry (2-8 -> 0xb1 -> IRQ 8 Mode:0 Active:0 Dest:1)
[    0.059037] IOAPIC[0]: Set routing entry (2-9 -> 0xc1 -> IRQ 33 Mode:1 Active:0 Dest:1)
[    0.062029] IOAPIC[0]: Set routing entry (2-10 -> 0xd1 -> IRQ 34 Mode:1 Active:0 Dest:1)
[    0.064029] IOAPIC[0]: Set routing entry (2-11 -> 0xe1 -> IRQ 35 Mode:1 Active:0 Dest:1)
[    0.066023] IOAPIC[0]: Set routing entry (2-12 -> 0x22 -> IRQ 12 Mode:0 Active:0 Dest:1)
[    0.068026] IOAPIC[0]: Set routing entry (2-13 -> 0x42 -> IRQ 13 Mode:0 Active:0 Dest:1)
[    0.070025] IOAPIC[0]: Set routing entry (2-14 -> 0x52 -> IRQ 14 Mode:0 Active:0 Dest:1)
[    0.073004] IOAPIC[0]: Set routing entry (2-15 -> 0x62 -> IRQ 15 Mode:0 Active:0 Dest:1)
[    0.075020]  apic 2 pin 16 not connected
[    0.075999]  apic 2 pin 17 not connected
[    0.076999]  apic 2 pin 18 not connected
[    0.077999]  apic 2 pin 19 not connected
[    0.078999]  apic 2 pin 20 not connected
[    0.079999]  apic 2 pin 21 not connected
[    0.080999]  apic 2 pin 22 not connected
[    0.081999]  apic 2 pin 23 not connected
[    0.083158] ..TIMER: vector=0x51 apic1=0 pin1=2 apic2=-1 pin2=-1
[    0.084998] smpboot: CPU0: Intel Common KVM processor stepping 01
[    0.087425] Using local APIC timer interrupts.
[    0.087425] calibrating APIC timer ...
[    0.090992] ... lapic delta = 6249032
[    0.090992] ..... delta 6249032
[    0.090992] ..... mult: 268434682
[    0.090992] ..... calibration result: 999845
[    0.090992] ..... CPU clock speed is 2998.0997 MHz.
[    0.090992] ..... host bus clock speed is 999.0845 MHz.
[    0.090992] ... verify APIC timer
[    0.201346] ... jiffies delta = 100
[    0.201984] ... jiffies result ok
[    0.203030] Performance Events: unsupported Netburst CPU model 6 no PMU driver, software events only.
[    0.207035] ------------[ cut here ]------------
[    0.207977] WARNING: at /c/kernel-tests/mm/kernel/workqueue.c:1217 worker_enter_idle+0x2b8/0x32b()
[    0.207977] Modules linked in:
[    0.207977] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-08414-g9645fff #15
[    0.207977] Call Trace:
[    0.207977]  [<ffffffff81087189>] ? worker_enter_idle+0x2b8/0x32b
[    0.207977]  [<ffffffff810559d9>] warn_slowpath_common+0xae/0xdb
[    0.207977]  [<ffffffff81055a2e>] warn_slowpath_null+0x28/0x31
[    0.207977]  [<ffffffff81087189>] worker_enter_idle+0x2b8/0x32b
[    0.207977]  [<ffffffff81087222>] start_worker+0x26/0x42
[    0.207977]  [<ffffffff81c8b261>] init_workqueues+0x2d2/0x59a
[    0.207977]  [<ffffffff81c8af8f>] ? usermodehelper_init+0x8a/0x8a
[    0.207977]  [<ffffffff81000284>] do_one_initcall+0xce/0x272
[    0.207977]  [<ffffffff81c6f650>] kernel_init+0x12e/0x3c1
[    0.207977]  [<ffffffff814b9b74>] kernel_thread_helper+0x4/0x10
[    0.207977]  [<ffffffff814b80b0>] ? retint_restore_args+0x13/0x13
[    0.207977]  [<ffffffff81c6f522>] ? start_kernel+0x737/0x737
[    0.207977]  [<ffffffff814b9b70>] ? gs_change+0x13/0x13
[    0.207977] ---[ end trace 5eb91373aeac2b15 ]---
[    0.210519] Testing tracer nop: PASSED
[    0.212314] NMI watchdog: disabled (cpu0): hardware events not enabled
[    0.215909] SMP alternatives: lockdep: fixing up alternatives
[    0.216992] smpboot: Booting Node   0, Processors  #1 OK
[    0.001999] kvm-clock: cpu 1, msr 0:dd11e01, secondary cpu clock
[    0.001999] masked ExtINT on CPU#1
[    0.001999] Disabled fast string operations
[    0.233973] TSC synchronization [CPU#0 -> CPU#1]:
[    0.233973] Measured 1551 cycles TSC warp between CPUs, turning off TSC clock.
[    0.233973] tsc: Marking TSC unstable due to check_tsc_sync_source failed
[    0.244338] Brought up 2 CPUs
[    0.244988] ----------------
[    0.245746] | NMI testsuite:
[    0.245976] --------------------
[    0.246976]   remote IPI:  ok  |
[    0.251287]    local IPI:  ok  |
[    0.256982] --------------------
[    0.257844] Good, all   2 testcases passed! |
[    0.258974] ---------------------------------
[    0.259976] smpboot: Total of 2 processors activated (11997.84 BogoMIPS)
[    0.262415] CPU0 attaching sched-domain:
[    0.262979]  domain 0: span 0-1 level CPU
[    0.264443]   groups: 0 (cpu_power = 1023) 1
[    0.265676] CPU1 attaching sched-domain:
[    0.265976]  domain 0: span 0-1 level CPU
[    0.267973]   groups: 1 0 (cpu_power = 1023)
[    0.277762] devtmpfs: initialized
[    0.278040] device: 'platform': device_add
[    0.279040] PM: Adding info for No Bus:platform
[    0.281100] bus: 'platform': registered
[    0.282097] bus: 'cpu': registered
[    0.282977] device: 'cpu': device_add
[    0.288670] PM: Adding info for No Bus:cpu
[    0.289057] bus: 'memory': registered
[    0.289975] device: 'memory': device_add
[    0.290996] PM: Adding info for No Bus:memory
[    0.293022] device: 'memory0': device_add
[    0.294004] bus: 'memory': add device memory0
[    0.301519] PM: Adding info for memory:memory0
[    0.302133] device: 'memory1': device_add
[    0.302977] bus: 'memory': add device memory1
[    0.304997] PM: Adding info for memory:memory1
[    0.322930] atomic64 test passed for x86-64 platform with CX8 and with SSE
[    0.323973] device class 'regulator': registering
[    0.326225] Registering platform device 'reg-dummy'. Parent at platform
[    0.335503] device: 'reg-dummy': device_add
[    0.335986] bus: 'platform': add device reg-dummy
[    0.337979] PM: Adding info for platform:reg-dummy
[    0.339011] bus: 'platform': add driver reg-dummy
[    0.339974] bus: 'platform': driver_probe_device: matched device reg-dummy with driver reg-dummy
[    0.341966] bus: 'platform': really_probe: probing driver reg-dummy with device reg-dummy
[    0.352600] device: 'regulator.0': device_add
[    0.353991] PM: Adding info for No Bus:regulator.0
[    0.355105] dummy: 
[    0.356032] driver: 'reg-dummy': driver_bound: bound to device 'reg-dummy'
[    0.357006] bus: 'platform': really_probe: bound device reg-dummy to driver reg-dummy
[    0.365639] RTC time: 11:15:27, date: 07/12/12
[    0.367178] NET: Registered protocol family 16
[    0.368221] device class 'bdi': registering
[    0.369003] device class 'tty': registering
[    0.370005] bus: 'node': registered
[    0.370963] device: 'node': device_add
[    0.378514] PM: Adding info for No Bus:node
[    0.379975] device class 'dma': registering
[    0.381072] device: 'node0': device_add
[    0.381964] bus: 'node': add device node0
[    0.382983] PM: Adding info for node:node0
[    0.384059] device: 'cpu0': device_add
[    0.391500] bus: 'cpu': add device cpu0
[    0.391982] PM: Adding info for cpu:cpu0
[    0.393007] device: 'cpu1': device_add
[    0.394015] bus: 'cpu': add device cpu1
[    0.394982] PM: Adding info for cpu:cpu1
[    0.395990] mtrr: your CPUs had inconsistent variable MTRR settings
[    0.397953] mtrr: your CPUs had inconsistent MTRRdefType settings
[    0.399953] mtrr: probably your BIOS does not setup all CPUs.
[    0.400953] mtrr: corrected configuration.
[    0.414025] device: 'default': device_add
[    0.415055] PM: Adding info for No Bus:default
[    0.418486] bio: create slab <bio-0> at 0
[    0.419082] device class 'block': registering
[    0.421070] device class 'misc': registering
[    0.422218] bus: 'serio': registered
[    0.422962] device class 'input': registering
[    0.426047] device class 'power_supply': registering
[    0.426983] device class 'watchdog': registering
[    0.428039] device class 'net': registering
[    0.430169] device: 'lo': device_add
[    0.431185] PM: Adding info for No Bus:lo
[    0.431604] Switching to clocksource kvm-clock
[    0.436812] Warning: could not register all branches stats
[    0.438281] Warning: could not register annotated branches stats
[    0.561660] device class 'mem': registering
[    0.562848] device: 'mem': device_add
[    0.564244] PM: Adding info for No Bus:mem
[    0.565406] device: 'kmem': device_add
[    0.566698] PM: Adding info for No Bus:kmem
[    0.567942] device: 'null': device_add
[    0.569141] PM: Adding info for No Bus:null
[    0.570280] device: 'zero': device_add
[    0.571499] PM: Adding info for No Bus:zero
[    0.572649] device: 'full': device_add
[    0.573805] PM: Adding info for No Bus:full
[    0.574929] device: 'random': device_add
[    0.576239] PM: Adding info for No Bus:random
[    0.577487] device: 'urandom': device_add
[    0.578784] PM: Adding info for No Bus:urandom
[    0.579994] device: 'kmsg': device_add
[    0.581186] PM: Adding info for No Bus:kmsg
[    0.582333] device: 'tty': device_add
[    0.583552] PM: Adding info for No Bus:tty
[    0.584866] device: 'console': device_add
[    0.586191] PM: Adding info for No Bus:console
[    0.587491] NET: Registered protocol family 1
[    0.589321] Unpacking initramfs...
[    2.786882] debug: unmapping init [mem 0xffff88000e8d6000-0xffff88000ffeffff]
[    2.871676] DMA-API: preallocated 32768 debug entries
[    2.873030] DMA-API: debugging enabled by kernel config
[    2.874668] Registering platform device 'rtc_cmos'. Parent at platform
[    2.876377] device: 'rtc_cmos': device_add
[    2.877481] bus: 'platform': add device rtc_cmos
[    2.878840] PM: Adding info for platform:rtc_cmos
[    2.880110] platform rtc_cmos: registered platform RTC device (no PNP device found)
[    2.882497] device: 'snapshot': device_add
[    2.883820] PM: Adding info for No Bus:snapshot
[    2.885128] bus: 'clocksource': registered
[    2.886236] device: 'clocksource': device_add
[    2.887415] PM: Adding info for No Bus:clocksource
[    2.888714] device: 'clocksource0': device_add
[    2.889895] bus: 'clocksource': add device clocksource0
[    2.891313] PM: Adding info for clocksource:clocksource0
[    2.892734] bus: 'platform': add driver alarmtimer
[    2.894050] Registering platform device 'alarmtimer'. Parent at platform
[    2.895808] device: 'alarmtimer': device_add
[    2.896943] bus: 'platform': add device alarmtimer
[    2.898261] PM: Adding info for platform:alarmtimer
[    2.899553] bus: 'platform': driver_probe_device: matched device alarmtimer with driver alarmtimer
[    2.901860] bus: 'platform': really_probe: probing driver alarmtimer with device alarmtimer
[    2.904029] driver: 'alarmtimer': driver_bound: bound to device 'alarmtimer'
[    2.905872] bus: 'platform': really_probe: bound device alarmtimer to driver alarmtimer
[    2.908139] audit: initializing netlink socket (disabled)
[    2.909625] type=2000 audit(1342091729.908:1): initialized
[    2.923090] Testing tracer function: PASSED
[    3.083849] Testing dynamic ftrace: PASSED
[    3.347420] Testing dynamic ftrace ops #1: [    3.374759] kwatchdog (24) used greatest stack depth: 6584 bytes left
(1 0 1 1 0) (1 1 2 1 0) 

[-- Attachment #3: config-3.5.0-rc6-08414-g9645fff --]
[-- Type: text/plain, Size: 50953 bytes --]

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 3.5.0-rc6 Kernel Configuration
#
CONFIG_64BIT=y
# CONFIG_X86_32 is not set
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_LATENCYTOP_SUPPORT=y
CONFIG_MMU=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_NEED_SG_DMA_LENGTH=y
# CONFIG_GENERIC_ISA_DMA is not set
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_GENERIC_GPIO=y
# CONFIG_ARCH_MAY_HAVE_PC_FDC is not set
# CONFIG_RWSEM_GENERIC_SPINLOCK is not set
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_DEFAULT_IDLE=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_X86_64_SMP=y
CONFIG_X86_HT=y
CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
CONFIG_ARCH_CPU_PROBE_RELEASE=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_CONSTRUCTORS=y
CONFIG_HAVE_IRQ_WORK=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y

#
# General setup
#
# CONFIG_EXPERIMENTAL is not set
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_CROSS_COMPILE=""
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
# CONFIG_KERNEL_GZIP is not set
CONFIG_KERNEL_BZIP2=y
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
# CONFIG_BSD_PROCESS_ACCT is not set
CONFIG_FHANDLE=y
CONFIG_TASKSTATS=y
# CONFIG_TASK_DELAY_ACCT is not set
# CONFIG_TASK_XACCT is not set
CONFIG_AUDIT=y
# CONFIG_AUDITSYSCALL is not set
# CONFIG_AUDIT_LOGINUID_IMMUTABLE is not set
CONFIG_HAVE_GENERIC_HARDIRQS=y

#
# IRQ subsystem
#
CONFIG_GENERIC_HARDIRQS=y
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_IRQ_DOMAIN=y
# CONFIG_IRQ_DOMAIN_DEBUG is not set
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_DATA=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
# CONFIG_NO_HZ is not set
CONFIG_HIGH_RES_TIMERS=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_PREEMPT_RCU is not set
CONFIG_RCU_FANOUT=64
CONFIG_RCU_FANOUT_LEAF=16
# CONFIG_RCU_FANOUT_EXACT is not set
CONFIG_TREE_RCU_TRACE=y
CONFIG_IKCONFIG=y
# CONFIG_IKCONFIG_PROC is not set
CONFIG_LOG_BUF_SHIFT=17
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_FREEZER=y
# CONFIG_CGROUP_DEVICE is not set
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
# CONFIG_CGROUP_CPUACCT is not set
# CONFIG_RESOURCE_COUNTERS is not set
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
# CONFIG_BLK_CGROUP is not set
# CONFIG_CHECKPOINT_RESTORE is not set
# CONFIG_NAMESPACES is not set
CONFIG_SCHED_AUTOGROUP=y
CONFIG_MM_OWNER=y
# CONFIG_SYSFS_DEPRECATED is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
# CONFIG_RD_XZ is not set
CONFIG_RD_LZO=y
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
CONFIG_EXPERT=y
# CONFIG_UID16 is not set
CONFIG_SYSCTL_SYSCALL=y
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
CONFIG_HOTPLUG=y
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_HAVE_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
# CONFIG_EPOLL is not set
# CONFIG_SIGNALFD is not set
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
# CONFIG_SHMEM is not set
CONFIG_AIO=y
CONFIG_EMBEDDED=y
CONFIG_HAVE_PERF_EVENTS=y
CONFIG_PERF_USE_VMALLOC=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
CONFIG_DEBUG_PERF_USE_VMALLOC=y
CONFIG_VM_EVENT_COUNTERS=y
CONFIG_SLUB_DEBUG=y
# CONFIG_COMPAT_BRK is not set
# CONFIG_SLAB is not set
CONFIG_SLUB=y
# CONFIG_SLOB is not set
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
CONFIG_OPROFILE=m
# CONFIG_OPROFILE_EVENT_MULTIPLEX is not set
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
# CONFIG_KPROBES is not set
# CONFIG_JUMP_LABEL is not set
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_USE_GENERIC_SMP_HELPERS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y

#
# GCOV-based kernel profiling
#
CONFIG_GCOV_KERNEL=y
CONFIG_GCOV_PROFILE_ALL=y
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
# CONFIG_MODULE_UNLOAD is not set
# CONFIG_MODVERSIONS is not set
CONFIG_MODULE_SRCVERSION_ALL=y
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
# CONFIG_BLK_DEV_INTEGRITY is not set

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_OSF_PARTITION is not set
# CONFIG_AMIGA_PARTITION is not set
# CONFIG_ATARI_PARTITION is not set
# CONFIG_MAC_PARTITION is not set
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
# CONFIG_MINIX_SUBPARTITION is not set
# CONFIG_SOLARIS_X86_PARTITION is not set
CONFIG_UNIXWARE_DISKLABEL=y
CONFIG_LDM_PARTITION=y
CONFIG_LDM_DEBUG=y
# CONFIG_SGI_PARTITION is not set
CONFIG_ULTRIX_PARTITION=y
# CONFIG_SUN_PARTITION is not set
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
CONFIG_SYSV68_PARTITION=y
CONFIG_BLOCK_COMPAT=y

#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_DEADLINE=m
# CONFIG_IOSCHED_CFQ is not set
CONFIG_DEFAULT_NOOP=y
CONFIG_DEFAULT_IOSCHED="noop"
# CONFIG_INLINE_SPIN_TRYLOCK is not set
# CONFIG_INLINE_SPIN_TRYLOCK_BH is not set
# CONFIG_INLINE_SPIN_LOCK is not set
# CONFIG_INLINE_SPIN_LOCK_BH is not set
# CONFIG_INLINE_SPIN_LOCK_IRQ is not set
# CONFIG_INLINE_SPIN_LOCK_IRQSAVE is not set
CONFIG_UNINLINE_SPIN_UNLOCK=y
# CONFIG_INLINE_SPIN_UNLOCK_BH is not set
# CONFIG_INLINE_SPIN_UNLOCK_IRQ is not set
# CONFIG_INLINE_SPIN_UNLOCK_IRQRESTORE is not set
# CONFIG_INLINE_READ_TRYLOCK is not set
# CONFIG_INLINE_READ_LOCK is not set
# CONFIG_INLINE_READ_LOCK_BH is not set
# CONFIG_INLINE_READ_LOCK_IRQ is not set
# CONFIG_INLINE_READ_LOCK_IRQSAVE is not set
# CONFIG_INLINE_READ_UNLOCK is not set
# CONFIG_INLINE_READ_UNLOCK_BH is not set
# CONFIG_INLINE_READ_UNLOCK_IRQ is not set
# CONFIG_INLINE_READ_UNLOCK_IRQRESTORE is not set
# CONFIG_INLINE_WRITE_TRYLOCK is not set
# CONFIG_INLINE_WRITE_LOCK is not set
# CONFIG_INLINE_WRITE_LOCK_BH is not set
# CONFIG_INLINE_WRITE_LOCK_IRQ is not set
# CONFIG_INLINE_WRITE_LOCK_IRQSAVE is not set
# CONFIG_INLINE_WRITE_UNLOCK is not set
# CONFIG_INLINE_WRITE_UNLOCK_BH is not set
# CONFIG_INLINE_WRITE_UNLOCK_IRQ is not set
# CONFIG_INLINE_WRITE_UNLOCK_IRQRESTORE is not set
# CONFIG_MUTEX_SPIN_ON_OWNER is not set
CONFIG_FREEZER=y

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_MPPARSE=y
# CONFIG_X86_EXTENDED_PLATFORM is not set
CONFIG_SCHED_OMIT_FRAME_POINTER=y
# CONFIG_KVMTOOL_TEST_ENABLE is not set
CONFIG_PARAVIRT_GUEST=y
# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set
# CONFIG_XEN is not set
# CONFIG_XEN_PRIVILEGED_GUEST is not set
CONFIG_KVM_CLOCK=y
CONFIG_KVM_GUEST=y
CONFIG_PARAVIRT=y
CONFIG_PARAVIRT_CLOCK=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_NO_BOOTMEM=y
# CONFIG_MEMTEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_CMPXCHG=y
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_XADD=y
CONFIG_X86_WP_WORKS_OK=y
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_PROCESSOR_SELECT=y
CONFIG_CPU_SUP_INTEL=y
# CONFIG_CPU_SUP_AMD is not set
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_HPET_TIMER=y
# CONFIG_DMI is not set
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
CONFIG_NR_CPUS=8
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
# CONFIG_IRQ_TIME_ACCOUNTING is not set
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
# CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS is not set
# CONFIG_X86_MCE is not set
# CONFIG_I8K is not set
# CONFIG_MICROCODE is not set
# CONFIG_X86_MSR is not set
CONFIG_X86_CPUID=m
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
# CONFIG_DIRECT_GBPAGES is not set
CONFIG_NUMA=y
# CONFIG_NUMA_EMU is not set
CONFIG_NODES_SHIFT=6
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
CONFIG_ARCH_MEMORY_PROBE=y
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
# CONFIG_SPARSEMEM_VMEMMAP is not set
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
CONFIG_ARCH_DISCARD_MEMBLOCK=y
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_SPARSE=y
# CONFIG_MEMORY_HOTREMOVE is not set
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=999999
CONFIG_COMPACTION=y
CONFIG_MIGRATION=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_VIRT_TO_BUS=y
# CONFIG_KSM is not set
CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
# CONFIG_CROSS_MEMORY_ATTACH is not set
CONFIG_CLEANCACHE=y
# CONFIG_FRONTSWAP is not set
# CONFIG_X86_CHECK_BIOS_CORRUPTION is not set
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
# CONFIG_MTRR_SANITIZER is not set
# CONFIG_X86_PAT is not set
# CONFIG_ARCH_RANDOM is not set
# CONFIG_SECCOMP is not set
# CONFIG_CC_STACKPROTECTOR is not set
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
# CONFIG_CRASH_DUMP is not set
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_PHYSICAL_ALIGN=0x1000000
CONFIG_HOTPLUG_CPU=y
# CONFIG_COMPAT_VDSO is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y

#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
CONFIG_PM_AUTOSLEEP=y
# CONFIG_PM_WAKELOCKS is not set
CONFIG_PM_RUNTIME=y
CONFIG_PM=y
CONFIG_PM_DEBUG=y
CONFIG_PM_ADVANCED_DEBUG=y
CONFIG_PM_SLEEP_DEBUG=y
CONFIG_PM_TRACE=y
CONFIG_PM_TRACE_RTC=y
# CONFIG_SFI is not set

#
# CPU Frequency scaling
#
# CONFIG_CPU_FREQ is not set
CONFIG_CPU_IDLE=y
CONFIG_CPU_IDLE_GOV_LADDER=y
# CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not set
# CONFIG_INTEL_IDLE is not set

#
# Memory power savings
#

#
# Bus options (PCI etc.)
#
# CONFIG_PCI is not set
# CONFIG_ARCH_SUPPORTS_MSI is not set
# CONFIG_ISA_DMA_API is not set
CONFIG_PCCARD=m
CONFIG_PCMCIA=m

#
# PC-card bridges
#

#
# Executable file formats / Emulations
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
# CONFIG_HAVE_AOUT is not set
# CONFIG_BINFMT_MISC is not set
CONFIG_IA32_EMULATION=y
# CONFIG_IA32_AOUT is not set
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_KEYS_COMPAT=y
CONFIG_HAVE_TEXT_POKE_SMP=y
CONFIG_X86_DEV_DMA_OPS=y
CONFIG_NET=y
CONFIG_COMPAT_NETLINK_MESSAGES=y

#
# Networking options
#
CONFIG_PACKET=m
CONFIG_UNIX=y
# CONFIG_UNIX_DIAG is not set
# CONFIG_NET_KEY is not set
# CONFIG_INET is not set
CONFIG_NETWORK_SECMARK=y
# CONFIG_NETFILTER is not set
CONFIG_ATM=m
# CONFIG_ATM_LANE is not set
CONFIG_STP=m
CONFIG_BRIDGE=m
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
CONFIG_LLC=m
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_PHONET is not set
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
# CONFIG_NET_SCH_CBQ is not set
CONFIG_NET_SCH_HTB=m
# CONFIG_NET_SCH_HFSC is not set
CONFIG_NET_SCH_ATM=m
# CONFIG_NET_SCH_PRIO is not set
CONFIG_NET_SCH_MULTIQ=m
CONFIG_NET_SCH_RED=m
# CONFIG_NET_SCH_SFB is not set
CONFIG_NET_SCH_SFQ=m
CONFIG_NET_SCH_TEQL=m
# CONFIG_NET_SCH_TBF is not set
# CONFIG_NET_SCH_GRED is not set
CONFIG_NET_SCH_DSMARK=m
# CONFIG_NET_SCH_NETEM is not set
CONFIG_NET_SCH_DRR=m
CONFIG_NET_SCH_MQPRIO=m
CONFIG_NET_SCH_CHOKE=m
CONFIG_NET_SCH_QFQ=m
# CONFIG_NET_SCH_CODEL is not set
CONFIG_NET_SCH_FQ_CODEL=m
# CONFIG_NET_SCH_PLUG is not set

#
# Classification
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=m
CONFIG_NET_CLS_TCINDEX=m
CONFIG_NET_CLS_FW=m
CONFIG_NET_CLS_U32=m
# CONFIG_CLS_U32_PERF is not set
CONFIG_CLS_U32_MARK=y
# CONFIG_NET_CLS_RSVP is not set
CONFIG_NET_CLS_RSVP6=m
# CONFIG_NET_CLS_FLOW is not set
# CONFIG_NET_CLS_CGROUP is not set
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
CONFIG_NET_EMATCH_CMP=m
CONFIG_NET_EMATCH_NBYTE=m
CONFIG_NET_EMATCH_U32=m
CONFIG_NET_EMATCH_META=m
# CONFIG_NET_EMATCH_TEXT is not set
# CONFIG_NET_CLS_ACT is not set
CONFIG_NET_CLS_IND=y
CONFIG_NET_SCH_FIFO=y
# CONFIG_DCB is not set
CONFIG_DNS_RESOLVER=m
# CONFIG_BATMAN_ADV is not set
CONFIG_OPENVSWITCH=m
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
# CONFIG_NETPRIO_CGROUP is not set
CONFIG_BQL=y
CONFIG_BPF_JIT=y

#
# Network testing
#
CONFIG_NET_PKTGEN=m
CONFIG_HAMRADIO=y

#
# Packet Radio protocols
#
# CONFIG_AX25 is not set
# CONFIG_CAN is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
CONFIG_WIRELESS=y
CONFIG_WIRELESS_EXT=y
CONFIG_WEXT_CORE=y
CONFIG_WEXT_PROC=y
CONFIG_WEXT_SPY=y
CONFIG_WEXT_PRIV=y
CONFIG_CFG80211=m
# CONFIG_NL80211_TESTMODE is not set
# CONFIG_CFG80211_DEVELOPER_WARNINGS is not set
CONFIG_CFG80211_REG_DEBUG=y
# CONFIG_CFG80211_DEFAULT_PS is not set
# CONFIG_CFG80211_DEBUGFS is not set
# CONFIG_CFG80211_INTERNAL_REGDB is not set
CONFIG_CFG80211_WEXT=y
CONFIG_LIB80211=m
CONFIG_LIB80211_DEBUG=y
CONFIG_MAC80211=m
CONFIG_MAC80211_HAS_RC=y
CONFIG_MAC80211_RC_PID=y
CONFIG_MAC80211_RC_MINSTREL=y
CONFIG_MAC80211_RC_MINSTREL_HT=y
# CONFIG_MAC80211_RC_DEFAULT_PID is not set
CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT="minstrel_ht"
CONFIG_MAC80211_LEDS=y
CONFIG_MAC80211_DEBUGFS=y
# CONFIG_MAC80211_MESSAGE_TRACING is not set
# CONFIG_MAC80211_DEBUG_MENU is not set
CONFIG_WIMAX=m
CONFIG_WIMAX_DEBUG_LEVEL=8
# CONFIG_RFKILL is not set
CONFIG_RFKILL_REGULATOR=m
CONFIG_NET_9P=m
# CONFIG_NET_9P_VIRTIO is not set
# CONFIG_NET_9P_DEBUG is not set
# CONFIG_CAIF is not set
CONFIG_HAVE_BPF_JIT=y

#
# Device Drivers
#

#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER_PATH=""
CONFIG_DEVTMPFS=y
# CONFIG_DEVTMPFS_MOUNT is not set
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=m
# CONFIG_FIRMWARE_IN_KERNEL is not set
CONFIG_EXTRA_FIRMWARE=""
CONFIG_DEBUG_DRIVER=y
# CONFIG_DEBUG_DEVRES is not set
# CONFIG_SYS_HYPERVISOR is not set
# CONFIG_GENERIC_CPU_DEVICES is not set
CONFIG_REGMAP=y
CONFIG_REGMAP_I2C=m
CONFIG_DMA_SHARED_BUFFER=y
# CONFIG_CONNECTOR is not set
# CONFIG_MTD is not set
CONFIG_PARPORT=m
CONFIG_PARPORT_PC=m
CONFIG_PARPORT_PC_PCMCIA=m
# CONFIG_PARPORT_GSC is not set
CONFIG_PARPORT_AX88796=m
CONFIG_PARPORT_1284=y
CONFIG_PARPORT_NOT_PC=y
CONFIG_BLK_DEV=y
# CONFIG_PARIDE is not set
# CONFIG_BLK_DEV_COW_COMMON is not set
# CONFIG_BLK_DEV_LOOP is not set

#
# DRBD disabled because PROC_FS, INET or CONNECTOR not selected
#
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_RAM is not set
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
# CONFIG_BLK_DEV_HD is not set

#
# Misc devices
#
# CONFIG_SENSORS_LIS3LV02D is not set
# CONFIG_AD525X_DPOT is not set
CONFIG_ENCLOSURE_SERVICES=m
# CONFIG_APDS9802ALS is not set
CONFIG_ISL29003=m
CONFIG_ISL29020=m
# CONFIG_SENSORS_TSL2550 is not set
CONFIG_SENSORS_BH1780=m
CONFIG_SENSORS_BH1770=m
# CONFIG_SENSORS_APDS990X is not set
CONFIG_HMC6352=m
# CONFIG_VMWARE_BALLOON is not set
CONFIG_BMP085=y
CONFIG_BMP085_I2C=m
# CONFIG_USB_SWITCH_FSA9480 is not set

#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
CONFIG_EEPROM_LEGACY=m
# CONFIG_EEPROM_93CX6 is not set

#
# Texas Instruments shared transport line discipline
#
# CONFIG_TI_ST is not set
# CONFIG_SENSORS_LIS3_I2C is not set

#
# Altera FPGA firmware download module
#
CONFIG_ALTERA_STAPL=m
CONFIG_HAVE_IDE=y
CONFIG_IDE=m

#
# Please see Documentation/ide/ide.txt for help/info on IDE drives
#
CONFIG_IDE_XFER_MODE=y
CONFIG_IDE_TIMINGS=y
CONFIG_IDE_ATAPI=y
# CONFIG_BLK_DEV_IDE_SATA is not set
# CONFIG_IDE_GD is not set
# CONFIG_BLK_DEV_IDECS is not set
# CONFIG_BLK_DEV_IDECD is not set
CONFIG_BLK_DEV_IDETAPE=m
CONFIG_IDE_TASK_IOCTL=y
CONFIG_IDE_PROC_FS=y

#
# IDE chipset support/bugfixes
#
# CONFIG_IDE_GENERIC is not set
# CONFIG_BLK_DEV_PLATFORM is not set
CONFIG_BLK_DEV_CMD640=m
CONFIG_BLK_DEV_CMD640_ENHANCED=y
# CONFIG_BLK_DEV_IDEDMA is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=m
# CONFIG_RAID_ATTRS is not set
CONFIG_SCSI=m
CONFIG_SCSI_DMA=y
# CONFIG_SCSI_NETLINK is not set
# CONFIG_SCSI_PROC_FS is not set

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
# CONFIG_CHR_DEV_OSST is not set
CONFIG_BLK_DEV_SR=m
CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_CHR_DEV_SG=m
CONFIG_CHR_DEV_SCH=m
# CONFIG_SCSI_ENCLOSURE is not set
CONFIG_SCSI_MULTI_LUN=y
# CONFIG_SCSI_CONSTANTS is not set
# CONFIG_SCSI_LOGGING is not set
# CONFIG_SCSI_SCAN_ASYNC is not set

#
# SCSI Transports
#
# CONFIG_SCSI_SPI_ATTRS is not set
# CONFIG_SCSI_FC_ATTRS is not set
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_SCSI_SAS_ATA=y
# CONFIG_SCSI_SAS_HOST_SMP is not set
# CONFIG_SCSI_SRP_ATTRS is not set
# CONFIG_SCSI_LOWLEVEL is not set
CONFIG_SCSI_LOWLEVEL_PCMCIA=y
# CONFIG_PCMCIA_AHA152X is not set
# CONFIG_PCMCIA_FDOMAIN is not set
CONFIG_PCMCIA_QLOGIC=m
# CONFIG_PCMCIA_SYM53C500 is not set
# CONFIG_SCSI_DH is not set
# CONFIG_SCSI_OSD_INITIATOR is not set
CONFIG_ATA=m
# CONFIG_ATA_NONSTANDARD is not set
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
# CONFIG_SATA_AHCI_PLATFORM is not set
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_SATA_MV=m

#
# PATA SFF controllers with BMDMA
#
CONFIG_PATA_ARASAN_CF=m

#
# PIO-only SFF controllers
#
CONFIG_PATA_PCMCIA=m
CONFIG_PATA_PLATFORM=m

#
# Generic fallback / legacy drivers
#
CONFIG_MD=y
# CONFIG_BLK_DEV_MD is not set
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
# CONFIG_DM_CRYPT is not set
# CONFIG_DM_SNAPSHOT is not set
CONFIG_DM_MIRROR=m
# CONFIG_DM_RAID is not set
# CONFIG_DM_ZERO is not set
# CONFIG_DM_MULTIPATH is not set
# CONFIG_DM_UEVENT is not set
CONFIG_TARGET_CORE=m
# CONFIG_TCM_IBLOCK is not set
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_LOOPBACK_TARGET=m
# CONFIG_ISCSI_TARGET is not set
CONFIG_MACINTOSH_DRIVERS=y
# CONFIG_MAC_EMUMOUSEBTN is not set
CONFIG_NETDEVICES=y
CONFIG_NET_CORE=y
CONFIG_DUMMY=m
CONFIG_EQUALIZER=m
CONFIG_MII=m
CONFIG_NETCONSOLE=m
CONFIG_NETCONSOLE_DYNAMIC=y
CONFIG_NETPOLL=y
CONFIG_NETPOLL_TRAP=y
CONFIG_NET_POLL_CONTROLLER=y
# CONFIG_TUN is not set
CONFIG_VETH=m
CONFIG_ARCNET=m
CONFIG_ARCNET_1201=m
CONFIG_ARCNET_1051=m
# CONFIG_ARCNET_RAW is not set
# CONFIG_ARCNET_CAP is not set
CONFIG_ARCNET_COM90xx=m
# CONFIG_ARCNET_COM90xxIO is not set
# CONFIG_ARCNET_RIM_I is not set
CONFIG_ARCNET_COM20020=m
CONFIG_ARCNET_COM20020_CS=m
CONFIG_ATM_DRIVERS=y
CONFIG_ATM_DUMMY=m

#
# CAIF transport drivers
#
CONFIG_ETHERNET=y
# CONFIG_NET_VENDOR_3COM is not set
CONFIG_NET_VENDOR_AMD=y
CONFIG_PCMCIA_NMCLAN=m
# CONFIG_NET_VENDOR_BROADCOM is not set
# CONFIG_NET_CALXEDA_XGMAC is not set
# CONFIG_DNET is not set
# CONFIG_NET_VENDOR_DLINK is not set
CONFIG_NET_VENDOR_FUJITSU=y
# CONFIG_PCMCIA_FMVJ18X is not set
CONFIG_NET_VENDOR_MICREL=y
# CONFIG_KS8842 is not set
# CONFIG_KS8851_MLL is not set
CONFIG_NET_VENDOR_NATSEMI=y
CONFIG_NET_VENDOR_8390=y
CONFIG_PCMCIA_AXNET=m
CONFIG_PCMCIA_PCNET=m
CONFIG_ETHOC=m
# CONFIG_NET_VENDOR_REALTEK is not set
CONFIG_NET_VENDOR_SMSC=y
CONFIG_PCMCIA_SMC91C92=m
CONFIG_NET_VENDOR_STMICRO=y
CONFIG_STMMAC_ETH=m
# CONFIG_STMMAC_PLATFORM is not set
CONFIG_STMMAC_DEBUG_FS=y
# CONFIG_STMMAC_DA is not set
# CONFIG_STMMAC_RING is not set
CONFIG_STMMAC_CHAINED=y
CONFIG_NET_VENDOR_WIZNET=y
CONFIG_WIZNET_W5100=m
CONFIG_WIZNET_W5300=m
# CONFIG_WIZNET_BUS_DIRECT is not set
# CONFIG_WIZNET_BUS_INDIRECT is not set
CONFIG_WIZNET_BUS_ANY=y
# CONFIG_NET_VENDOR_XIRCOM is not set
CONFIG_PHYLIB=m

#
# MII PHY device drivers
#
CONFIG_AMD_PHY=m
# CONFIG_MARVELL_PHY is not set
# CONFIG_DAVICOM_PHY is not set
CONFIG_QSEMI_PHY=m
CONFIG_LXT_PHY=m
CONFIG_CICADA_PHY=m
CONFIG_VITESSE_PHY=m
# CONFIG_SMSC_PHY is not set
# CONFIG_BROADCOM_PHY is not set
# CONFIG_BCM87XX_PHY is not set
# CONFIG_ICPLUS_PHY is not set
# CONFIG_REALTEK_PHY is not set
# CONFIG_NATIONAL_PHY is not set
CONFIG_STE10XP=m
# CONFIG_LSI_ET1011C_PHY is not set
CONFIG_MICREL_PHY=m
CONFIG_MDIO_BITBANG=m
CONFIG_MDIO_GPIO=m
# CONFIG_PLIP is not set
CONFIG_PPP=m
CONFIG_PPP_BSDCOMP=m
CONFIG_PPP_DEFLATE=m
# CONFIG_PPP_FILTER is not set
# CONFIG_PPPOATM is not set
# CONFIG_PPP_ASYNC is not set
CONFIG_PPP_SYNC_TTY=m
CONFIG_SLIP=m
CONFIG_SLHC=m
# CONFIG_SLIP_COMPRESSED is not set
# CONFIG_SLIP_SMART is not set
# CONFIG_SLIP_MODE_SLIP6 is not set
CONFIG_WLAN=y
# CONFIG_PCMCIA_RAYCS is not set
CONFIG_LIBERTAS_THINFIRM=m
CONFIG_LIBERTAS_THINFIRM_DEBUG=y
# CONFIG_ATMEL is not set
CONFIG_AIRO_CS=m
# CONFIG_MAC80211_HWSIM is not set
CONFIG_ATH_COMMON=m
# CONFIG_ATH_DEBUG is not set
CONFIG_ATH9K_HW=m
CONFIG_ATH9K_COMMON=m
# CONFIG_ATH9K_BTCOEX_SUPPORT is not set
CONFIG_ATH9K=m
CONFIG_ATH9K_AHB=y
CONFIG_ATH9K_DEBUGFS=y
# CONFIG_ATH9K_DFS_CERTIFIED is not set
CONFIG_ATH9K_MAC_DEBUG=y
# CONFIG_ATH9K_RATE_CONTROL is not set
# CONFIG_ATH6KL is not set
CONFIG_B43=m
CONFIG_B43_BCMA=y
# CONFIG_B43_BCMA_EXTRA is not set
CONFIG_B43_SSB=y
# CONFIG_B43_PCMCIA is not set
CONFIG_B43_BCMA_PIO=y
CONFIG_B43_PIO=y
# CONFIG_B43_PHY_LP is not set
CONFIG_B43_LEDS=y
CONFIG_B43_HWRNG=y
# CONFIG_B43_DEBUG is not set
CONFIG_B43LEGACY=m
CONFIG_B43LEGACY_LEDS=y
CONFIG_B43LEGACY_HWRNG=y
# CONFIG_B43LEGACY_DEBUG is not set
CONFIG_B43LEGACY_DMA=y
CONFIG_B43LEGACY_PIO=y
CONFIG_B43LEGACY_DMA_AND_PIO_MODE=y
# CONFIG_B43LEGACY_DMA_MODE is not set
# CONFIG_B43LEGACY_PIO_MODE is not set
CONFIG_BRCMUTIL=m
CONFIG_BRCMSMAC=m
# CONFIG_BRCMFMAC is not set
# CONFIG_BRCMDBG is not set
# CONFIG_HOSTAP is not set
# CONFIG_LIBERTAS is not set
# CONFIG_HERMES is not set
CONFIG_RT2X00=m
CONFIG_WL_TI=y
# CONFIG_WL12XX is not set
# CONFIG_WL18XX is not set
# CONFIG_WLCORE is not set
# CONFIG_MWIFIEX is not set

#
# WiMAX Wireless Broadband devices
#

#
# Enable USB support to see WiMAX USB drivers
#
CONFIG_WAN=y
CONFIG_HDLC=m
# CONFIG_HDLC_RAW is not set
CONFIG_HDLC_RAW_ETH=m
CONFIG_HDLC_CISCO=m
# CONFIG_HDLC_FR is not set
# CONFIG_HDLC_PPP is not set

#
# X.25/LAPB support is disabled
#
# CONFIG_DLCI is not set
# CONFIG_SBNI is not set
# CONFIG_ISDN is not set

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_FF_MEMLESS=m
CONFIG_INPUT_POLLDEV=m
CONFIG_INPUT_SPARSEKMAP=m
CONFIG_INPUT_MATRIXKMAP=m

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=m
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
CONFIG_INPUT_EVDEV=m
CONFIG_INPUT_EVBUG=m

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
CONFIG_KEYBOARD_ADP5588=m
CONFIG_KEYBOARD_ADP5589=m
CONFIG_KEYBOARD_ATKBD=y
CONFIG_KEYBOARD_QT1070=m
# CONFIG_KEYBOARD_LKKBD is not set
CONFIG_KEYBOARD_GPIO=m
CONFIG_KEYBOARD_GPIO_POLLED=m
CONFIG_KEYBOARD_TCA6416=m
CONFIG_KEYBOARD_TCA8418=m
CONFIG_KEYBOARD_MATRIX=m
CONFIG_KEYBOARD_LM8323=m
# CONFIG_KEYBOARD_LM8333 is not set
CONFIG_KEYBOARD_MAX7359=m
CONFIG_KEYBOARD_MCS=m
# CONFIG_KEYBOARD_MPR121 is not set
CONFIG_KEYBOARD_NEWTON=m
CONFIG_KEYBOARD_OPENCORES=m
CONFIG_KEYBOARD_STOWAWAY=m
# CONFIG_KEYBOARD_SUNKBD is not set
CONFIG_KEYBOARD_OMAP4=m
CONFIG_KEYBOARD_XTKBD=m
CONFIG_INPUT_MOUSE=y
# CONFIG_MOUSE_PS2 is not set
# CONFIG_MOUSE_SERIAL is not set
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_GPIO is not set
CONFIG_MOUSE_SYNAPTICS_I2C=m
# CONFIG_MOUSE_SYNAPTICS_USB is not set
CONFIG_INPUT_JOYSTICK=y
CONFIG_JOYSTICK_ANALOG=m
# CONFIG_JOYSTICK_A3D is not set
# CONFIG_JOYSTICK_ADI is not set
# CONFIG_JOYSTICK_COBRA is not set
CONFIG_JOYSTICK_GF2K=m
CONFIG_JOYSTICK_GRIP=m
# CONFIG_JOYSTICK_GRIP_MP is not set
# CONFIG_JOYSTICK_GUILLEMOT is not set
CONFIG_JOYSTICK_INTERACT=m
# CONFIG_JOYSTICK_SIDEWINDER is not set
CONFIG_JOYSTICK_TMDC=m
CONFIG_JOYSTICK_IFORCE=m
CONFIG_JOYSTICK_IFORCE_232=y
CONFIG_JOYSTICK_WARRIOR=m
CONFIG_JOYSTICK_MAGELLAN=m
CONFIG_JOYSTICK_SPACEORB=m
# CONFIG_JOYSTICK_SPACEBALL is not set
# CONFIG_JOYSTICK_STINGER is not set
CONFIG_JOYSTICK_TWIDJOY=m
CONFIG_JOYSTICK_ZHENHUA=m
CONFIG_JOYSTICK_DB9=m
CONFIG_JOYSTICK_GAMECON=m
# CONFIG_JOYSTICK_TURBOGRAFX is not set
CONFIG_JOYSTICK_AS5011=m
CONFIG_JOYSTICK_JOYDUMP=m
# CONFIG_JOYSTICK_XPAD is not set
# CONFIG_JOYSTICK_WALKERA0701 is not set
# CONFIG_INPUT_TABLET is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
# CONFIG_INPUT_MISC is not set

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=m
CONFIG_SERIO_CT82C710=m
# CONFIG_SERIO_PARKBD is not set
CONFIG_SERIO_LIBPS2=y
CONFIG_SERIO_RAW=m
CONFIG_SERIO_ALTERA_PS2=m
# CONFIG_SERIO_PS2MULT is not set
CONFIG_GAMEPORT=m
# CONFIG_GAMEPORT_NS558 is not set
# CONFIG_GAMEPORT_L4 is not set

#
# Character devices
#
# CONFIG_VT is not set
# CONFIG_UNIX98_PTYS is not set
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
# CONFIG_SERIAL_NONSTANDARD is not set
# CONFIG_TRACE_ROUTER is not set
CONFIG_TRACE_SINK=m
CONFIG_DEVKMEM=y

#
# Serial drivers
#
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_FIX_EARLYCON_MEM=y
# CONFIG_SERIAL_8250_CS is not set
CONFIG_SERIAL_8250_NR_UARTS=4
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
# CONFIG_SERIAL_8250_EXTENDED is not set

#
# Non-8250 serial port support
#
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
# CONFIG_SERIAL_TIMBERDALE is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
CONFIG_SERIAL_ALTERA_UART=m
CONFIG_SERIAL_ALTERA_UART_MAXPORTS=4
CONFIG_SERIAL_ALTERA_UART_BAUDRATE=115200
# CONFIG_SERIAL_XILINX_PS_UART is not set
CONFIG_TTY_PRINTK=y
# CONFIG_PRINTER is not set
CONFIG_PPDEV=m
CONFIG_HVC_DRIVER=y
CONFIG_VIRTIO_CONSOLE=m
# CONFIG_IPMI_HANDLER is not set
CONFIG_HW_RANDOM=m
CONFIG_HW_RANDOM_TIMERIOMEM=m
# CONFIG_HW_RANDOM_VIA is not set
# CONFIG_HW_RANDOM_VIRTIO is not set
# CONFIG_NVRAM is not set
# CONFIG_RTC is not set
# CONFIG_GEN_RTC is not set
CONFIG_R3964=m

#
# PCMCIA character devices
#
CONFIG_SYNCLINK_CS=m
CONFIG_CARDMAN_4000=m
CONFIG_CARDMAN_4040=m
CONFIG_IPWIRELESS=m
# CONFIG_MWAVE is not set
CONFIG_RAW_DRIVER=m
CONFIG_MAX_RAW_DEVS=256
CONFIG_HANGCHECK_TIMER=m
CONFIG_TCG_TPM=y
CONFIG_TCG_TIS=y
# CONFIG_TCG_NSC is not set
# CONFIG_TCG_ATMEL is not set
CONFIG_I2C=m
CONFIG_I2C_BOARDINFO=y
# CONFIG_I2C_COMPAT is not set
CONFIG_I2C_CHARDEV=m
CONFIG_I2C_MUX=m

#
# Multiplexer I2C Chip support
#
CONFIG_I2C_MUX_GPIO=m
CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_ALGOBIT=m

#
# I2C Hardware Bus support
#

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
CONFIG_I2C_GPIO=m
# CONFIG_I2C_PCA_PLATFORM is not set
# CONFIG_I2C_PXA_PCI is not set
CONFIG_I2C_SIMTEC=m

#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_PARPORT is not set
# CONFIG_I2C_PARPORT_LIGHT is not set

#
# Other I2C/SMBus bus drivers
#
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
CONFIG_I2C_DEBUG_BUS=y
# CONFIG_SPI is not set
# CONFIG_HSI is not set

#
# PPS support
#

#
# PPS generators support
#

#
# PTP clock support
#

#
# Enable Device Drivers -> PPS to see the PTP clock options.
#
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
CONFIG_GPIOLIB=y
# CONFIG_DEBUG_GPIO is not set
CONFIG_GPIO_MAX730X=m

#
# Memory mapped GPIO drivers:
#
# CONFIG_GPIO_GENERIC_PLATFORM is not set
CONFIG_GPIO_IT8761E=m

#
# I2C GPIO expanders:
#
CONFIG_GPIO_MAX7300=m
# CONFIG_GPIO_MAX732X is not set
# CONFIG_GPIO_PCA953X is not set
# CONFIG_GPIO_PCF857X is not set
# CONFIG_GPIO_ADP5588 is not set

#
# PCI GPIO expanders:
#

#
# SPI GPIO expanders:
#
CONFIG_GPIO_MCP23S08=m

#
# AC97 GPIO expanders:
#

#
# MODULbus GPIO expanders:
#
CONFIG_W1=m

#
# 1-wire Bus Masters
#
# CONFIG_W1_MASTER_DS1WM is not set
# CONFIG_W1_MASTER_GPIO is not set

#
# 1-wire Slaves
#
CONFIG_W1_SLAVE_THERM=m
# CONFIG_W1_SLAVE_SMEM is not set
CONFIG_W1_SLAVE_DS2408=m
# CONFIG_W1_SLAVE_DS2423 is not set
CONFIG_W1_SLAVE_DS2431=m
CONFIG_W1_SLAVE_DS2433=m
# CONFIG_W1_SLAVE_DS2433_CRC is not set
# CONFIG_W1_SLAVE_DS2760 is not set
CONFIG_W1_SLAVE_DS2780=m
CONFIG_W1_SLAVE_DS2781=m
# CONFIG_W1_SLAVE_DS28E04 is not set
CONFIG_W1_SLAVE_BQ27000=m
CONFIG_POWER_SUPPLY=y
CONFIG_POWER_SUPPLY_DEBUG=y
# CONFIG_PDA_POWER is not set
CONFIG_TEST_POWER=m
CONFIG_BATTERY_DS2780=m
CONFIG_BATTERY_DS2781=m
CONFIG_BATTERY_DS2782=m
CONFIG_BATTERY_SBS=m
# CONFIG_BATTERY_BQ27x00 is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
CONFIG_CHARGER_PCF50633=m
# CONFIG_CHARGER_MAX8903 is not set
CONFIG_CHARGER_LP8727=m
CONFIG_CHARGER_GPIO=m
# CONFIG_CHARGER_SMB347 is not set
# CONFIG_POWER_AVS is not set
CONFIG_HWMON=m
CONFIG_HWMON_VID=m
CONFIG_HWMON_DEBUG_CHIP=y

#
# Native drivers
#
CONFIG_SENSORS_ADM1021=m
# CONFIG_SENSORS_ADM1025 is not set
CONFIG_SENSORS_ADM1026=m
# CONFIG_SENSORS_ADM1029 is not set
CONFIG_SENSORS_ADM1031=m
CONFIG_SENSORS_ADM9240=m
CONFIG_SENSORS_ADT7475=m
CONFIG_SENSORS_ASC7621=m
CONFIG_SENSORS_DS620=m
CONFIG_SENSORS_DS1621=m
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_F71882FG is not set
# CONFIG_SENSORS_F75375S is not set
CONFIG_SENSORS_FSCHMD=m
CONFIG_SENSORS_G760A=m
CONFIG_SENSORS_GL518SM=m
CONFIG_SENSORS_GL520SM=m
# CONFIG_SENSORS_GPIO_FAN is not set
# CONFIG_SENSORS_IT87 is not set
CONFIG_SENSORS_JC42=m
# CONFIG_SENSORS_LM63 is not set
CONFIG_SENSORS_LM73=m
CONFIG_SENSORS_LM75=m
CONFIG_SENSORS_LM77=m
# CONFIG_SENSORS_LM78 is not set
CONFIG_SENSORS_LM80=m
CONFIG_SENSORS_LM83=m
# CONFIG_SENSORS_LM85 is not set
CONFIG_SENSORS_LM87=m
# CONFIG_SENSORS_LM90 is not set
CONFIG_SENSORS_LM92=m
# CONFIG_SENSORS_LM93 is not set
CONFIG_SENSORS_LTC4151=m
CONFIG_SENSORS_LM95241=m
CONFIG_SENSORS_MAX16065=m
CONFIG_SENSORS_MAX1619=m
# CONFIG_SENSORS_PC87360 is not set
CONFIG_SENSORS_PC87427=m
# CONFIG_SENSORS_PCF8591 is not set
CONFIG_SENSORS_SHT15=m
# CONFIG_SENSORS_SHT21 is not set
# CONFIG_SENSORS_EMC1403 is not set
# CONFIG_SENSORS_EMC2103 is not set
CONFIG_SENSORS_EMC6W201=m
CONFIG_SENSORS_SMSC47M1=m
CONFIG_SENSORS_SMSC47M192=m
CONFIG_SENSORS_SCH56XX_COMMON=m
# CONFIG_SENSORS_SCH5627 is not set
CONFIG_SENSORS_SCH5636=m
CONFIG_SENSORS_ADS1015=m
CONFIG_SENSORS_ADS7828=m
# CONFIG_SENSORS_THMC50 is not set
CONFIG_SENSORS_VIA_CPUTEMP=m
CONFIG_SENSORS_VT1211=m
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83791D is not set
CONFIG_SENSORS_W83792D=m
# CONFIG_SENSORS_W83627HF is not set
# CONFIG_SENSORS_W83627EHF is not set
CONFIG_SENSORS_APPLESMC=m
# CONFIG_SENSORS_MC13783_ADC is not set
CONFIG_THERMAL=m
CONFIG_THERMAL_HWMON=y
CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
CONFIG_WATCHDOG_NOWAYOUT=y

#
# Watchdog Device Drivers
#
# CONFIG_SOFT_WATCHDOG is not set
# CONFIG_ACQUIRE_WDT is not set
CONFIG_ADVANTECH_WDT=m
CONFIG_SC520_WDT=m
CONFIG_SBC_FITPC2_WATCHDOG=m
# CONFIG_EUROTECH_WDT is not set
# CONFIG_IB700_WDT is not set
CONFIG_IBMASR=m
CONFIG_WAFER_WDT=m
# CONFIG_IT8712F_WDT is not set
CONFIG_SC1200_WDT=m
CONFIG_PC87413_WDT=m
# CONFIG_60XX_WDT is not set
CONFIG_SBC8360_WDT=m
# CONFIG_CPU5_WDT is not set
# CONFIG_SMSC_SCH311X_WDT is not set
# CONFIG_SMSC37B787_WDT is not set
# CONFIG_W83627HF_WDT is not set
# CONFIG_W83697HF_WDT is not set
CONFIG_W83697UG_WDT=m
CONFIG_W83877F_WDT=m
# CONFIG_W83977F_WDT is not set
# CONFIG_MACHZ_WDT is not set
# CONFIG_SBC_EPX_C3_WATCHDOG is not set
CONFIG_SSB_POSSIBLE=y

#
# Sonics Silicon Backplane
#
CONFIG_SSB=m
CONFIG_SSB_BLOCKIO=y
CONFIG_SSB_PCMCIAHOST_POSSIBLE=y
# CONFIG_SSB_PCMCIAHOST is not set
CONFIG_SSB_SDIOHOST_POSSIBLE=y
CONFIG_SSB_SDIOHOST=y
CONFIG_SSB_SILENT=y
CONFIG_BCMA_POSSIBLE=y

#
# Broadcom specific AMBA
#
CONFIG_BCMA=m
CONFIG_BCMA_BLOCKIO=y
# CONFIG_BCMA_DEBUG is not set

#
# Multifunction device drivers
#
CONFIG_MFD_CORE=m
CONFIG_MFD_SM501=m
# CONFIG_MFD_SM501_GPIO is not set
CONFIG_HTC_PASIC3=m
# CONFIG_MFD_LM3533 is not set
CONFIG_TPS6105X=m
# CONFIG_TPS65010 is not set
CONFIG_TPS6507X=m
# CONFIG_MFD_TPS65217 is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_MFD_ARIZONA_I2C is not set
CONFIG_MFD_PCF50633=m
CONFIG_PCF50633_ADC=m
# CONFIG_PCF50633_GPIO is not set
CONFIG_MFD_MC13783=m
CONFIG_MFD_MC13XXX=m
CONFIG_MFD_MC13XXX_I2C=m
CONFIG_ABX500_CORE=y
# CONFIG_MFD_WL1273_CORE is not set
CONFIG_REGULATOR=y
CONFIG_REGULATOR_DEBUG=y
# CONFIG_REGULATOR_DUMMY is not set
CONFIG_REGULATOR_FIXED_VOLTAGE=m
CONFIG_REGULATOR_VIRTUAL_CONSUMER=m
CONFIG_REGULATOR_USERSPACE_CONSUMER=m
# CONFIG_REGULATOR_GPIO is not set
CONFIG_REGULATOR_AD5398=m
# CONFIG_REGULATOR_MC13783 is not set
# CONFIG_REGULATOR_MC13892 is not set
CONFIG_REGULATOR_ISL6271A=m
# CONFIG_REGULATOR_MAX1586 is not set
CONFIG_REGULATOR_MAX8649=m
CONFIG_REGULATOR_MAX8660=m
CONFIG_REGULATOR_MAX8952=m
CONFIG_REGULATOR_LP3971=m
# CONFIG_REGULATOR_LP3972 is not set
# CONFIG_REGULATOR_PCF50633 is not set
CONFIG_REGULATOR_TPS6105X=m
# CONFIG_REGULATOR_TPS62360 is not set
# CONFIG_REGULATOR_TPS65023 is not set
CONFIG_REGULATOR_TPS6507X=m
# CONFIG_MEDIA_SUPPORT is not set

#
# Graphics support
#
CONFIG_DRM=m
# CONFIG_VGASTATE is not set
CONFIG_VIDEO_OUTPUT_CONTROL=m
# CONFIG_FB is not set
# CONFIG_EXYNOS_VIDEO is not set
# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=m
CONFIG_SOUND=m
# CONFIG_SOUND_OSS_CORE is not set
# CONFIG_SND is not set
# CONFIG_SOUND_PRIME is not set

#
# HID support
#
CONFIG_HID=m
# CONFIG_HIDRAW is not set
# CONFIG_UHID is not set
CONFIG_HID_GENERIC=m

#
# Special HID drivers
#
# CONFIG_USB_ARCH_HAS_OHCI is not set
# CONFIG_USB_ARCH_HAS_EHCI is not set
# CONFIG_USB_ARCH_HAS_XHCI is not set
CONFIG_USB_SUPPORT=y
CONFIG_USB_ARCH_HAS_HCD=y
# CONFIG_USB is not set
# CONFIG_USB_OTG_WHITELIST is not set
# CONFIG_USB_OTG_BLACKLIST_HUB is not set

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
#
# CONFIG_USB_GADGET is not set

#
# OTG and related infrastructure
#
CONFIG_MMC=m
CONFIG_MMC_DEBUG=y
CONFIG_MMC_UNSAFE_RESUME=y

#
# MMC/SD/SDIO Card Drivers
#
CONFIG_MMC_BLOCK=m
CONFIG_MMC_BLOCK_MINORS=8
# CONFIG_MMC_BLOCK_BOUNCE is not set
CONFIG_SDIO_UART=m
# CONFIG_MMC_TEST is not set

#
# MMC/SD/SDIO Host Controller Drivers
#
# CONFIG_MMC_SDHCI is not set
CONFIG_MEMSTICK=m
# CONFIG_MEMSTICK_DEBUG is not set

#
# MemoryStick drivers
#
CONFIG_MEMSTICK_UNSAFE_RESUME=y
# CONFIG_MSPRO_BLOCK is not set

#
# MemoryStick Host Controller Drivers
#
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=m

#
# LED drivers
#
# CONFIG_LEDS_LM3530 is not set
CONFIG_LEDS_GPIO=m
CONFIG_LEDS_LP3944=m
CONFIG_LEDS_LP5521=m
# CONFIG_LEDS_LP5523 is not set
CONFIG_LEDS_PCA955X=m
# CONFIG_LEDS_PCA9633 is not set
# CONFIG_LEDS_REGULATOR is not set
# CONFIG_LEDS_BD2802 is not set
# CONFIG_LEDS_LT3593 is not set
# CONFIG_LEDS_MC13783 is not set
CONFIG_LEDS_TCA6507=m
# CONFIG_LEDS_LM3556 is not set
CONFIG_LEDS_OT200=m
CONFIG_LEDS_TRIGGERS=y

#
# LED Triggers
#
CONFIG_LEDS_TRIGGER_TIMER=m
# CONFIG_LEDS_TRIGGER_ONESHOT is not set
CONFIG_LEDS_TRIGGER_HEARTBEAT=m
CONFIG_LEDS_TRIGGER_BACKLIGHT=m
# CONFIG_LEDS_TRIGGER_GPIO is not set
# CONFIG_LEDS_TRIGGER_DEFAULT_ON is not set

#
# iptables trigger is under Netfilter config (LED target)
#
CONFIG_LEDS_TRIGGER_TRANSIENT=m
# CONFIG_ACCESSIBILITY is not set
# CONFIG_EDAC is not set
# CONFIG_RTC_CLASS is not set
CONFIG_DMADEVICES=y
CONFIG_DMADEVICES_DEBUG=y
CONFIG_DMADEVICES_VDEBUG=y

#
# DMA Devices
#
# CONFIG_TIMB_DMA is not set
CONFIG_DMA_ENGINE=y

#
# DMA Clients
#
# CONFIG_NET_DMA is not set
# CONFIG_ASYNC_TX_DMA is not set
CONFIG_DMATEST=m
CONFIG_AUXDISPLAY=y
CONFIG_KS0108=m
CONFIG_KS0108_PORT=0x378
CONFIG_KS0108_DELAY=2
CONFIG_UIO=m
CONFIG_UIO_PDRV=m
CONFIG_UIO_PDRV_GENIRQ=m
CONFIG_VIRTIO=m
CONFIG_VIRTIO_RING=m

#
# Virtio drivers
#
CONFIG_VIRTIO_BALLOON=m

#
# Microsoft Hyper-V guest support
#
CONFIG_STAGING=y
CONFIG_ECHO=m
CONFIG_COMEDI=m
# CONFIG_COMEDI_DEBUG is not set
CONFIG_COMEDI_DEFAULT_BUF_SIZE_KB=2048
CONFIG_COMEDI_DEFAULT_BUF_MAXSIZE_KB=20480
# CONFIG_COMEDI_MISC_DRIVERS is not set
# CONFIG_COMEDI_PCMCIA_DRIVERS is not set
CONFIG_COMEDI_8255=m
# CONFIG_PANEL is not set
CONFIG_RTLLIB=m
CONFIG_RTLLIB_CRYPTO_CCMP=m
# CONFIG_RTLLIB_CRYPTO_TKIP is not set
CONFIG_RTLLIB_CRYPTO_WEP=m
CONFIG_ZRAM=m
CONFIG_ZRAM_DEBUG=y
CONFIG_ZSMALLOC=m
# CONFIG_WLAGS49_H2 is not set
CONFIG_WLAGS49_H25=m
# CONFIG_FT1000 is not set

#
# Speakup console speech
#
CONFIG_TOUCHSCREEN_CLEARPAD_TM1217=m
# CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4 is not set
CONFIG_STAGING_MEDIA=y

#
# Android
#
# CONFIG_ANDROID is not set
# CONFIG_PHONE is not set
# CONFIG_IPACK_BUS is not set
# CONFIG_WIMAX_GDM72XX is not set
CONFIG_X86_PLATFORM_DEVICES=y
CONFIG_SENSORS_HDAPS=m
# CONFIG_SAMSUNG_LAPTOP is not set
CONFIG_SAMSUNG_Q10=m

#
# Hardware Spinlock drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_CLKBLD_I8253=y
CONFIG_IOMMU_SUPPORT=y

#
# Remoteproc drivers (EXPERIMENTAL)
#

#
# Rpmsg drivers (EXPERIMENTAL)
#
CONFIG_VIRT_DRIVERS=y
# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
CONFIG_MEMORY=y
# CONFIG_IIO is not set
# CONFIG_PWM is not set

#
# Firmware Drivers
#
CONFIG_EDD=m
CONFIG_EDD_OFF=y
# CONFIG_FIRMWARE_MEMMAP is not set
# CONFIG_DELL_RBU is not set
CONFIG_DCDBAS=m
CONFIG_ISCSI_IBFT_FIND=y
# CONFIG_GOOGLE_FIRMWARE is not set

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
CONFIG_EXT2_FS=m
# CONFIG_EXT2_FS_XATTR is not set
CONFIG_EXT2_FS_XIP=y
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=m
CONFIG_EXT4_USE_FOR_EXT23=y
# CONFIG_EXT4_FS_XATTR is not set
CONFIG_EXT4_DEBUG=y
CONFIG_FS_XIP=y
CONFIG_JBD2=m
CONFIG_JBD2_DEBUG=y
CONFIG_REISERFS_FS=m
# CONFIG_REISERFS_CHECK is not set
# CONFIG_REISERFS_PROC_INFO is not set
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_JFS_FS=m
CONFIG_JFS_POSIX_ACL=y
# CONFIG_JFS_SECURITY is not set
CONFIG_JFS_DEBUG=y
CONFIG_JFS_STATISTICS=y
# CONFIG_XFS_FS is not set
CONFIG_GFS2_FS=m
# CONFIG_OCFS2_FS is not set
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
CONFIG_FILE_LOCKING=y
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_FANOTIFY=y
CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
# CONFIG_QUOTA is not set
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y
CONFIG_AUTOFS4_FS=m
# CONFIG_FUSE_FS is not set

#
# Caches
#
CONFIG_FSCACHE=m
# CONFIG_FSCACHE_STATS is not set
# CONFIG_FSCACHE_HISTOGRAM is not set
# CONFIG_FSCACHE_DEBUG is not set
CONFIG_FSCACHE_OBJECT_LIST=y
# CONFIG_CACHEFILES is not set

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
# CONFIG_ZISOFS is not set
# CONFIG_UDF_FS is not set

#
# DOS/FAT/NT Filesystems
#
# CONFIG_MSDOS_FS is not set
# CONFIG_VFAT_FS is not set
# CONFIG_NTFS_FS is not set

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_SYSFS=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_CONFIGFS_FS=m
# CONFIG_MISC_FILESYSTEMS is not set
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS=m
CONFIG_NLS_DEFAULT="iso8859-1"
# CONFIG_NLS_CODEPAGE_437 is not set
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
# CONFIG_NLS_CODEPAGE_850 is not set
CONFIG_NLS_CODEPAGE_852=m
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
CONFIG_NLS_CODEPAGE_860=m
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
CONFIG_NLS_CODEPAGE_863=m
# CONFIG_NLS_CODEPAGE_864 is not set
CONFIG_NLS_CODEPAGE_865=m
CONFIG_NLS_CODEPAGE_866=m
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
CONFIG_NLS_CODEPAGE_950=m
# CONFIG_NLS_CODEPAGE_932 is not set
CONFIG_NLS_CODEPAGE_949=m
# CONFIG_NLS_CODEPAGE_874 is not set
CONFIG_NLS_ISO8859_8=m
CONFIG_NLS_CODEPAGE_1250=m
# CONFIG_NLS_CODEPAGE_1251 is not set
# CONFIG_NLS_ASCII is not set
CONFIG_NLS_ISO8859_1=m
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
CONFIG_NLS_ISO8859_4=m
CONFIG_NLS_ISO8859_5=m
CONFIG_NLS_ISO8859_6=m
# CONFIG_NLS_ISO8859_7 is not set
CONFIG_NLS_ISO8859_9=m
CONFIG_NLS_ISO8859_13=m
CONFIG_NLS_ISO8859_14=m
CONFIG_NLS_ISO8859_15=m
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
CONFIG_NLS_MAC_ROMAN=m
# CONFIG_NLS_MAC_CELTIC is not set
CONFIG_NLS_MAC_CENTEURO=m
CONFIG_NLS_MAC_CROATIAN=m
# CONFIG_NLS_MAC_CYRILLIC is not set
# CONFIG_NLS_MAC_GAELIC is not set
CONFIG_NLS_MAC_GREEK=m
# CONFIG_NLS_MAC_ICELAND is not set
CONFIG_NLS_MAC_INUIT=m
CONFIG_NLS_MAC_ROMANIAN=m
# CONFIG_NLS_MAC_TURKISH is not set
# CONFIG_NLS_UTF8 is not set

#
# Kernel hacking
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_PRINTK_TIME=y
CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
CONFIG_ENABLE_WARN_DEPRECATED=y
CONFIG_ENABLE_MUST_CHECK=y
CONFIG_FRAME_WARN=2048
# CONFIG_MAGIC_SYSRQ is not set
# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_READABLE_ASM is not set
# CONFIG_UNUSED_SYMBOLS is not set
CONFIG_DEBUG_FS=y
# CONFIG_HEADERS_CHECK is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_SHIRQ=y
CONFIG_LOCKUP_DETECTOR=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=1
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=1
# CONFIG_PANIC_ON_OOPS is not set
CONFIG_PANIC_ON_OOPS_VALUE=0
# CONFIG_DETECT_HUNG_TASK is not set
CONFIG_SCHED_DEBUG=y
CONFIG_SCHEDSTATS=y
# CONFIG_TIMER_STATS is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_SLUB_DEBUG_ON is not set
CONFIG_SLUB_STATS=y
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_RT_MUTEX_TESTER is not set
CONFIG_DEBUG_SPINLOCK=y
CONFIG_DEBUG_MUTEXES=y
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y
# CONFIG_PROVE_RCU is not set
# CONFIG_SPARSE_RCU_POINTER is not set
CONFIG_LOCKDEP=y
# CONFIG_LOCK_STAT is not set
# CONFIG_DEBUG_LOCKDEP is not set
CONFIG_TRACE_IRQFLAGS=y
# CONFIG_DEBUG_ATOMIC_SLEEP is not set
CONFIG_DEBUG_LOCKING_API_SELFTESTS=y
CONFIG_STACKTRACE=y
CONFIG_DEBUG_STACK_USAGE=y
# CONFIG_DEBUG_KOBJECT is not set
CONFIG_DEBUG_BUGVERBOSE=y
# CONFIG_DEBUG_INFO is not set
# CONFIG_DEBUG_VM is not set
# CONFIG_DEBUG_VIRTUAL is not set
# CONFIG_DEBUG_WRITECOUNT is not set
CONFIG_DEBUG_MEMORY_INIT=y
CONFIG_DEBUG_LIST=y
# CONFIG_TEST_LIST_SORT is not set
CONFIG_DEBUG_SG=y
CONFIG_DEBUG_NOTIFIERS=y
# CONFIG_DEBUG_CREDENTIALS is not set
CONFIG_ARCH_WANT_FRAME_POINTERS=y
CONFIG_FRAME_POINTER=y
# CONFIG_BOOT_PRINTK_DELAY is not set
# CONFIG_RCU_TORTURE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60
# CONFIG_RCU_CPU_STALL_INFO is not set
CONFIG_RCU_TRACE=y
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
CONFIG_DEBUG_FORCE_WEAK_PER_CPU=y
# CONFIG_DEBUG_PER_CPU_MAPS is not set
# CONFIG_LKDTM is not set
CONFIG_CPU_NOTIFIER_ERROR_INJECT=m
# CONFIG_FAULT_INJECTION is not set
CONFIG_LATENCYTOP=y
CONFIG_DEBUG_PAGEALLOC=y
CONFIG_WANT_PAGE_DEBUG_FLAGS=y
CONFIG_PAGE_GUARD=y
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
# CONFIG_EVENT_POWER_TRACING_DEPRECATED is not set
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_RING_BUFFER_ALLOW_SWAP=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
CONFIG_FUNCTION_TRACER=y
# CONFIG_FUNCTION_GRAPH_TRACER is not set
# CONFIG_IRQSOFF_TRACER is not set
# CONFIG_SCHED_TRACER is not set
CONFIG_FTRACE_SYSCALLS=y
CONFIG_TRACE_BRANCH_PROFILING=y
# CONFIG_BRANCH_PROFILE_NONE is not set
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
CONFIG_PROFILE_ALL_BRANCHES=y
# CONFIG_BRANCH_TRACER is not set
CONFIG_STACK_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y
# CONFIG_UPROBE_EVENT is not set
# CONFIG_PROBE_EVENTS is not set
CONFIG_DYNAMIC_FTRACE=y
# CONFIG_FUNCTION_PROFILER is not set
CONFIG_FTRACE_MCOUNT_RECORD=y
CONFIG_FTRACE_SELFTEST=y
CONFIG_FTRACE_STARTUP_TEST=y
# CONFIG_EVENT_TRACE_TEST_SYSCALLS is not set
# CONFIG_RING_BUFFER_BENCHMARK is not set
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DMA_API_DEBUG=y
CONFIG_ATOMIC64_SELFTEST=y
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KGDB=y
CONFIG_HAVE_ARCH_KMEMCHECK=y
# CONFIG_TEST_KSTRTOX is not set
CONFIG_STRICT_DEVMEM=y
# CONFIG_X86_VERBOSE_BOOTUP is not set
# CONFIG_EARLY_PRINTK is not set
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_X86_PTDUMP=y
# CONFIG_DEBUG_RODATA is not set
CONFIG_DEBUG_SET_MODULE_RONX=y
# CONFIG_DEBUG_NX_TEST is not set
# CONFIG_IOMMU_STRESS is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
CONFIG_IO_DELAY_TYPE_0X80=0
CONFIG_IO_DELAY_TYPE_0XED=1
CONFIG_IO_DELAY_TYPE_UDELAY=2
CONFIG_IO_DELAY_TYPE_NONE=3
# CONFIG_IO_DELAY_0X80 is not set
CONFIG_IO_DELAY_0XED=y
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEFAULT_IO_DELAY_TYPE=1
# CONFIG_DEBUG_BOOT_PARAMS is not set
# CONFIG_CPA_DEBUG is not set
CONFIG_OPTIMIZE_INLINING=y
CONFIG_DEBUG_NMI_SELFTEST=y

#
# Security options
#
CONFIG_KEYS=y
# CONFIG_TRUSTED_KEYS is not set
CONFIG_ENCRYPTED_KEYS=m
# CONFIG_KEYS_DEBUG_PROC_KEYS is not set
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
CONFIG_SECURITY_PATH=y
# CONFIG_SECURITY_TOMOYO is not set
# CONFIG_SECURITY_APPARMOR is not set
CONFIG_SECURITY_YAMA=y
CONFIG_INTEGRITY=y
# CONFIG_INTEGRITY_SIGNATURE is not set
CONFIG_IMA=y
CONFIG_IMA_MEASURE_PCR_IDX=10
CONFIG_IMA_AUDIT=y
# CONFIG_EVM is not set
CONFIG_DEFAULT_SECURITY_YAMA=y
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_DEFAULT_SECURITY="yama"
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=m
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=m
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=m
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_PCOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
CONFIG_CRYPTO_USER=m
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
CONFIG_CRYPTO_GF128MUL=m
CONFIG_CRYPTO_NULL=m
CONFIG_CRYPTO_WORKQUEUE=y
CONFIG_CRYPTO_CRYPTD=m
CONFIG_CRYPTO_AUTHENC=m
# CONFIG_CRYPTO_TEST is not set
CONFIG_CRYPTO_ABLK_HELPER_X86=m
CONFIG_CRYPTO_GLUE_HELPER_X86=m

#
# Authenticated Encryption with Associated Data
#
CONFIG_CRYPTO_CCM=m
# CONFIG_CRYPTO_GCM is not set
CONFIG_CRYPTO_SEQIV=m

#
# Block modes
#
CONFIG_CRYPTO_CBC=m
CONFIG_CRYPTO_CTR=m
# CONFIG_CRYPTO_CTS is not set
# CONFIG_CRYPTO_ECB is not set
CONFIG_CRYPTO_LRW=m
# CONFIG_CRYPTO_PCBC is not set
CONFIG_CRYPTO_XTS=m

#
# Hash modes
#
CONFIG_CRYPTO_HMAC=y

#
# Digest
#
CONFIG_CRYPTO_CRC32C=m
# CONFIG_CRYPTO_CRC32C_INTEL is not set
# CONFIG_CRYPTO_GHASH is not set
# CONFIG_CRYPTO_MD4 is not set
CONFIG_CRYPTO_MD5=y
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_RMD128 is not set
CONFIG_CRYPTO_RMD160=m
# CONFIG_CRYPTO_RMD256 is not set
# CONFIG_CRYPTO_RMD320 is not set
CONFIG_CRYPTO_SHA1=y
# CONFIG_CRYPTO_SHA1_SSSE3 is not set
CONFIG_CRYPTO_SHA256=m
# CONFIG_CRYPTO_SHA512 is not set
CONFIG_CRYPTO_TGR192=m
CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL=m

#
# Ciphers
#
CONFIG_CRYPTO_AES=m
CONFIG_CRYPTO_AES_X86_64=m
# CONFIG_CRYPTO_AES_NI_INTEL is not set
CONFIG_CRYPTO_ANUBIS=m
CONFIG_CRYPTO_ARC4=m
CONFIG_CRYPTO_BLOWFISH=m
CONFIG_CRYPTO_BLOWFISH_COMMON=m
CONFIG_CRYPTO_BLOWFISH_X86_64=m
# CONFIG_CRYPTO_CAMELLIA is not set
CONFIG_CRYPTO_CAMELLIA_X86_64=m
CONFIG_CRYPTO_CAST5=m
CONFIG_CRYPTO_CAST6=m
# CONFIG_CRYPTO_DES is not set
# CONFIG_CRYPTO_FCRYPT is not set
CONFIG_CRYPTO_KHAZAD=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SERPENT_SSE2_X86_64=m
# CONFIG_CRYPTO_SERPENT_AVX_X86_64 is not set
CONFIG_CRYPTO_TEA=m
# CONFIG_CRYPTO_TWOFISH is not set
CONFIG_CRYPTO_TWOFISH_COMMON=m
CONFIG_CRYPTO_TWOFISH_X86_64=m
CONFIG_CRYPTO_TWOFISH_X86_64_3WAY=m
# CONFIG_CRYPTO_TWOFISH_AVX_X86_64 is not set

#
# Compression
#
CONFIG_CRYPTO_DEFLATE=m
# CONFIG_CRYPTO_ZLIB is not set
CONFIG_CRYPTO_LZO=m

#
# Random Number Generation
#
# CONFIG_CRYPTO_ANSI_CPRNG is not set
CONFIG_CRYPTO_USER_API=m
# CONFIG_CRYPTO_USER_API_HASH is not set
CONFIG_CRYPTO_USER_API_SKCIPHER=m
CONFIG_CRYPTO_HW=y
# CONFIG_CRYPTO_DEV_PADLOCK is not set
CONFIG_HAVE_KVM=y
# CONFIG_VIRTUALIZATION is not set
CONFIG_BINARY_PRINTF=y

#
# Library routines
#
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_IO=y
CONFIG_CRC_CCITT=m
CONFIG_CRC16=m
# CONFIG_CRC_T10DIF is not set
# CONFIG_CRC_ITU_T is not set
CONFIG_CRC32=y
CONFIG_CRC32_SELFTEST=y
# CONFIG_CRC32_SLICEBY8 is not set
CONFIG_CRC32_SLICEBY4=y
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC7 is not set
# CONFIG_LIBCRC32C is not set
CONFIG_CRC8=m
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=m
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
# CONFIG_XZ_DEC is not set
# CONFIG_XZ_DEC_BCJ is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_NLATTR=y
CONFIG_AVERAGE=y
CONFIG_CORDIC=m
CONFIG_DDR=y

[-- Attachment #4: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
  2012-07-12 13:06       ` Fengguang Wu
  (?)
@ 2012-07-12 17:05         ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-12 17:05 UTC (permalink / raw)
  To: Fengguang Wu
  Cc: linux-kernel, torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar,
	herbert, davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel,
	gustavo, johan.hedberg, linux-bluetooth, martin.petersen

Hello, Fengguang.

On Thu, Jul 12, 2012 at 09:06:48PM +0800, Fengguang Wu wrote:
> [    0.207977] WARNING: at /c/kernel-tests/mm/kernel/workqueue.c:1217 worker_enter_idle+0x2b8/0x32b()
> [    0.207977] Modules linked in:
> [    0.207977] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-08414-g9645fff #15
> [    0.207977] Call Trace:
> [    0.207977]  [<ffffffff81087189>] ? worker_enter_idle+0x2b8/0x32b
> [    0.207977]  [<ffffffff810559d9>] warn_slowpath_common+0xae/0xdb
> [    0.207977]  [<ffffffff81055a2e>] warn_slowpath_null+0x28/0x31
> [    0.207977]  [<ffffffff81087189>] worker_enter_idle+0x2b8/0x32b
> [    0.207977]  [<ffffffff81087222>] start_worker+0x26/0x42
> [    0.207977]  [<ffffffff81c8b261>] init_workqueues+0x2d2/0x59a
> [    0.207977]  [<ffffffff81c8af8f>] ? usermodehelper_init+0x8a/0x8a
> [    0.207977]  [<ffffffff81000284>] do_one_initcall+0xce/0x272
> [    0.207977]  [<ffffffff81c6f650>] kernel_init+0x12e/0x3c1
> [    0.207977]  [<ffffffff814b9b74>] kernel_thread_helper+0x4/0x10
> [    0.207977]  [<ffffffff814b80b0>] ? retint_restore_args+0x13/0x13
> [    0.207977]  [<ffffffff81c6f522>] ? start_kernel+0x737/0x737
> [    0.207977]  [<ffffffff814b9b70>] ? gs_change+0x13/0x13

Yeah, I forgot to flip the WARN_ON_ONCE() condition so that it checks
nr_running before looking at pool->nr_workers.  The warning is
spurious.  Will post fix soon.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH 3/6] workqueue: use @pool instead of @gcwq or @cpu where applicable
  2012-07-10 23:30       ` Tony Luck
  (?)
@ 2012-07-12 17:06         ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-12 17:06 UTC (permalink / raw)
  To: Tony Luck
  Cc: linux-kernel, torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar,
	herbert, davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel,
	gustavo, johan.hedberg, linux-bluetooth, martin.petersen

Hello, Tony.

On Tue, Jul 10, 2012 at 04:30:36PM -0700, Tony Luck wrote:
> On Mon, Jul 9, 2012 at 11:41 AM, Tejun Heo <tj@kernel.org> wrote:
> > @@ -1234,7 +1235,7 @@ static void worker_enter_idle(struct worker *worker)
> >          */
> >         WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
> >                      pool->nr_workers == pool->nr_idle &&
> > -                    atomic_read(get_gcwq_nr_running(gcwq->cpu)));
> > +                    atomic_read(get_pool_nr_running(pool)));
> >  }
> 
> Just had this WARN_ON_ONCE trigger on ia64 booting next-20120710. I
> haven't bisected ... just noticed  that two patches in this series tinker
> with lines in this check. next-20120706 didn't generate the WARN.

Sorry about the delay.  The warning is spurious.  Now that there are
multiple pools, the nr_running check should be done before the
pool->nr_workers check.  Will post fix soon.

Thank you.

-- 
tejun

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH 2/6] workqueue: factor out worker_pool from global_cwq
  2012-07-10  4:48     ` Namhyung Kim
  (?)
@ 2012-07-12 17:07       ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-12 17:07 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: linux-kernel, torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar,
	herbert, davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel,
	gustavo, johan.hedberg, linux-bluetooth, martin.petersen

Hello, Namhyung.

Sorry about the delay.

On Tue, Jul 10, 2012 at 01:48:44PM +0900, Namhyung Kim wrote:
> > +	struct list_head	idle_list;	/* X: list of idle workers */
> > +	struct timer_list	idle_timer;	/* L: worker idle timeout */
> > +	struct timer_list	mayday_timer;	/* L: SOS timer for dworkers */
> 
> What is 'dworkers'?

My stupid finger pressing 'd' when I never meant to. :)

> > -	/* workers are chained either in the idle_list or busy_hash */
> > -	struct list_head	idle_list;	/* X: list of idle workers */
> > +	/* workers are chained either in busy_head or pool idle_list */
> 
> s/busy_head/busy_hash/ ?

Will fix.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
  2012-07-12 17:05         ` Tejun Heo
  (?)
@ 2012-07-12 21:45           ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-12 21:45 UTC (permalink / raw)
  To: Fengguang Wu
  Cc: linux-kernel, torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar,
	herbert, davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel,
	gustavo, johan.hedberg, linux-bluetooth, martin.petersen,
	Tony Luck

Hello, again.

On Thu, Jul 12, 2012 at 10:05:19AM -0700, Tejun Heo wrote:
> On Thu, Jul 12, 2012 at 09:06:48PM +0800, Fengguang Wu wrote:
> > [    0.207977] WARNING: at /c/kernel-tests/mm/kernel/workqueue.c:1217 worker_enter_idle+0x2b8/0x32b()
> > [    0.207977] Modules linked in:
> > [    0.207977] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-08414-g9645fff #15
> > [    0.207977] Call Trace:
> > [    0.207977]  [<ffffffff81087189>] ? worker_enter_idle+0x2b8/0x32b
> > [    0.207977]  [<ffffffff810559d9>] warn_slowpath_common+0xae/0xdb
> > [    0.207977]  [<ffffffff81055a2e>] warn_slowpath_null+0x28/0x31
> > [    0.207977]  [<ffffffff81087189>] worker_enter_idle+0x2b8/0x32b
> > [    0.207977]  [<ffffffff81087222>] start_worker+0x26/0x42
> > [    0.207977]  [<ffffffff81c8b261>] init_workqueues+0x2d2/0x59a
> > [    0.207977]  [<ffffffff81c8af8f>] ? usermodehelper_init+0x8a/0x8a
> > [    0.207977]  [<ffffffff81000284>] do_one_initcall+0xce/0x272
> > [    0.207977]  [<ffffffff81c6f650>] kernel_init+0x12e/0x3c1
> > [    0.207977]  [<ffffffff814b9b74>] kernel_thread_helper+0x4/0x10
> > [    0.207977]  [<ffffffff814b80b0>] ? retint_restore_args+0x13/0x13
> > [    0.207977]  [<ffffffff81c6f522>] ? start_kernel+0x737/0x737
> > [    0.207977]  [<ffffffff814b9b70>] ? gs_change+0x13/0x13
> 
> Yeah, I forgot to flip the WARN_ON_ONCE() condition so that it checks
> nr_running before looking at pool->nr_running.  The warning is
> spurious.  Will post fix soon.

I was wrong and am now dazed and confused.  That's from
init_workqueues() where only cpu0 is running.  How the hell did
nr_running manage to become non-zero at that point?  Can you please
apply the following patch and report the boot log?  Thank you.

---
 kernel/workqueue.c |   13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -699,8 +699,10 @@ void wq_worker_waking_up(struct task_str
 {
 	struct worker *worker = kthread_data(task);
 
-	if (!(worker->flags & WORKER_NOT_RUNNING))
+	if (!(worker->flags & WORKER_NOT_RUNNING)) {
+		WARN_ON_ONCE(cpu != worker->pool->gcwq->cpu);
 		atomic_inc(get_pool_nr_running(worker->pool));
+	}
 }
 
 /**
@@ -730,6 +732,7 @@ struct task_struct *wq_worker_sleeping(s
 
 	/* this can only happen on the local cpu */
 	BUG_ON(cpu != raw_smp_processor_id());
+	WARN_ON_ONCE(cpu != worker->pool->gcwq->cpu);
 
 	/*
 	 * The counterpart of the following dec_and_test, implied mb,
@@ -3855,6 +3858,10 @@ static int __init init_workqueues(void)
 		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
 			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
 
+		if (cpu != WORK_CPU_UNBOUND)
+			printk("XXX cpu=%d gcwq=%p base=%p\n", cpu, gcwq,
+			       per_cpu_ptr(&pool_nr_running, cpu));
+
 		for_each_worker_pool(pool, gcwq) {
 			pool->gcwq = gcwq;
 			INIT_LIST_HEAD(&pool->worklist);
@@ -3868,6 +3875,10 @@ static int __init init_workqueues(void)
 				    (unsigned long)pool);
 
 			ida_init(&pool->worker_ida);
+
+			printk("XXX cpu=%d nr_running=%d @ %p\n", gcwq->cpu,
+			       atomic_read(get_pool_nr_running(pool)),
+			       get_pool_nr_running(pool));
 		}
 
 		gcwq->trustee_state = TRUSTEE_DONE;

^ permalink raw reply	[flat|nested] 96+ messages in thread

* [PATCH UPDATED 2/6] workqueue: factor out worker_pool from global_cwq
  2012-07-09 18:41   ` Tejun Heo
  (?)
@ 2012-07-12 21:49       ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-12 21:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen

From bd7bdd43dcb81bb08240b9401b36a104f77dc135 Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@kernel.org>
Date: Thu, 12 Jul 2012 14:46:37 -0700

Move worklist and all worker management fields from global_cwq into
the new struct worker_pool.  worker_pool points back to the containing
gcwq.  worker and cpu_workqueue_struct are updated to point to
worker_pool instead of gcwq too.

This change is mechanical and doesn't introduce any functional
difference beyond the rearrangement of fields and an added level of
indirection in some places.  This is to prepare for multiple pools per
gcwq.

v2: Comment typo fixes as suggested by Namhyung.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
---
Minor update.  git branches updated accordingly.  Thanks.

 include/trace/events/workqueue.h |    2 +-
 kernel/workqueue.c               |  216 ++++++++++++++++++++-----------------
 2 files changed, 118 insertions(+), 100 deletions(-)

diff --git a/include/trace/events/workqueue.h b/include/trace/events/workqueue.h
index 4018f50..f28d1b6 100644
--- a/include/trace/events/workqueue.h
+++ b/include/trace/events/workqueue.h
@@ -54,7 +54,7 @@ TRACE_EVENT(workqueue_queue_work,
 		__entry->function	= work->func;
 		__entry->workqueue	= cwq->wq;
 		__entry->req_cpu	= req_cpu;
-		__entry->cpu		= cwq->gcwq->cpu;
+		__entry->cpu		= cwq->pool->gcwq->cpu;
 	),
 
 	TP_printk("work struct=%p function=%pf workqueue=%p req_cpu=%u cpu=%u",
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 27637c2..61f1544 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -115,6 +115,7 @@ enum {
  */
 
 struct global_cwq;
+struct worker_pool;
 
 /*
  * The poor guys doing the actual heavy lifting.  All on-duty workers
@@ -131,7 +132,7 @@ struct worker {
 	struct cpu_workqueue_struct *current_cwq; /* L: current_work's cwq */
 	struct list_head	scheduled;	/* L: scheduled works */
 	struct task_struct	*task;		/* I: worker task */
-	struct global_cwq	*gcwq;		/* I: the associated gcwq */
+	struct worker_pool	*pool;		/* I: the associated pool */
 	/* 64 bytes boundary on 64bit, 32 on 32bit */
 	unsigned long		last_active;	/* L: last active timestamp */
 	unsigned int		flags;		/* X: flags */
@@ -139,6 +140,21 @@ struct worker {
 	struct work_struct	rebind_work;	/* L: rebind worker to cpu */
 };
 
+struct worker_pool {
+	struct global_cwq	*gcwq;		/* I: the owning gcwq */
+
+	struct list_head	worklist;	/* L: list of pending works */
+	int			nr_workers;	/* L: total number of workers */
+	int			nr_idle;	/* L: currently idle ones */
+
+	struct list_head	idle_list;	/* X: list of idle workers */
+	struct timer_list	idle_timer;	/* L: worker idle timeout */
+	struct timer_list	mayday_timer;	/* L: SOS timer for workers */
+
+	struct ida		worker_ida;	/* L: for worker IDs */
+	struct worker		*first_idle;	/* L: first idle worker */
+};
+
 /*
  * Global per-cpu workqueue.  There's one and only one for each cpu
  * and all works are queued and processed here regardless of their
@@ -146,27 +162,18 @@ struct worker {
  */
 struct global_cwq {
 	spinlock_t		lock;		/* the gcwq lock */
-	struct list_head	worklist;	/* L: list of pending works */
 	unsigned int		cpu;		/* I: the associated cpu */
 	unsigned int		flags;		/* L: GCWQ_* flags */
 
-	int			nr_workers;	/* L: total number of workers */
-	int			nr_idle;	/* L: currently idle ones */
-
-	/* workers are chained either in the idle_list or busy_hash */
-	struct list_head	idle_list;	/* X: list of idle workers */
+	/* workers are chained either in busy_hash or pool idle_list */
 	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
 						/* L: hash of busy workers */
 
-	struct timer_list	idle_timer;	/* L: worker idle timeout */
-	struct timer_list	mayday_timer;	/* L: SOS timer for dworkers */
-
-	struct ida		worker_ida;	/* L: for worker IDs */
+	struct worker_pool	pool;		/* the worker pools */
 
 	struct task_struct	*trustee;	/* L: for gcwq shutdown */
 	unsigned int		trustee_state;	/* L: trustee state */
 	wait_queue_head_t	trustee_wait;	/* trustee wait */
-	struct worker		*first_idle;	/* L: first idle worker */
 } ____cacheline_aligned_in_smp;
 
 /*
@@ -175,7 +182,7 @@ struct global_cwq {
  * aligned at two's power of the number of flag bits.
  */
 struct cpu_workqueue_struct {
-	struct global_cwq	*gcwq;		/* I: the associated gcwq */
+	struct worker_pool	*pool;		/* I: the associated pool */
 	struct workqueue_struct *wq;		/* I: the owning workqueue */
 	int			work_color;	/* L: current color */
 	int			flush_color;	/* L: flushing color */
@@ -555,7 +562,7 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
 
 	if (data & WORK_STRUCT_CWQ)
 		return ((struct cpu_workqueue_struct *)
-			(data & WORK_STRUCT_WQ_DATA_MASK))->gcwq;
+			(data & WORK_STRUCT_WQ_DATA_MASK))->pool->gcwq;
 
 	cpu = data >> WORK_STRUCT_FLAG_BITS;
 	if (cpu == WORK_CPU_NONE)
@@ -587,13 +594,13 @@ static bool __need_more_worker(struct global_cwq *gcwq)
  */
 static bool need_more_worker(struct global_cwq *gcwq)
 {
-	return !list_empty(&gcwq->worklist) && __need_more_worker(gcwq);
+	return !list_empty(&gcwq->pool.worklist) && __need_more_worker(gcwq);
 }
 
 /* Can I start working?  Called from busy but !running workers. */
 static bool may_start_working(struct global_cwq *gcwq)
 {
-	return gcwq->nr_idle;
+	return gcwq->pool.nr_idle;
 }
 
 /* Do I need to keep working?  Called from currently running workers. */
@@ -601,7 +608,7 @@ static bool keep_working(struct global_cwq *gcwq)
 {
 	atomic_t *nr_running = get_gcwq_nr_running(gcwq->cpu);
 
-	return !list_empty(&gcwq->worklist) &&
+	return !list_empty(&gcwq->pool.worklist) &&
 		(atomic_read(nr_running) <= 1 ||
 		 gcwq->flags & GCWQ_HIGHPRI_PENDING);
 }
@@ -622,8 +629,8 @@ static bool need_to_manage_workers(struct global_cwq *gcwq)
 static bool too_many_workers(struct global_cwq *gcwq)
 {
 	bool managing = gcwq->flags & GCWQ_MANAGING_WORKERS;
-	int nr_idle = gcwq->nr_idle + managing; /* manager is considered idle */
-	int nr_busy = gcwq->nr_workers - nr_idle;
+	int nr_idle = gcwq->pool.nr_idle + managing; /* manager is considered idle */
+	int nr_busy = gcwq->pool.nr_workers - nr_idle;
 
 	return nr_idle > 2 && (nr_idle - 2) * MAX_IDLE_WORKERS_RATIO >= nr_busy;
 }
@@ -635,10 +642,10 @@ static bool too_many_workers(struct global_cwq *gcwq)
 /* Return the first worker.  Safe with preemption disabled */
 static struct worker *first_worker(struct global_cwq *gcwq)
 {
-	if (unlikely(list_empty(&gcwq->idle_list)))
+	if (unlikely(list_empty(&gcwq->pool.idle_list)))
 		return NULL;
 
-	return list_first_entry(&gcwq->idle_list, struct worker, entry);
+	return list_first_entry(&gcwq->pool.idle_list, struct worker, entry);
 }
 
 /**
@@ -696,7 +703,8 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 				       unsigned int cpu)
 {
 	struct worker *worker = kthread_data(task), *to_wakeup = NULL;
-	struct global_cwq *gcwq = get_gcwq(cpu);
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	atomic_t *nr_running = get_gcwq_nr_running(cpu);
 
 	if (worker->flags & WORKER_NOT_RUNNING)
@@ -716,7 +724,7 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 	 * could be manipulating idle_list, so dereferencing idle_list
 	 * without gcwq lock is safe.
 	 */
-	if (atomic_dec_and_test(nr_running) && !list_empty(&gcwq->worklist))
+	if (atomic_dec_and_test(nr_running) && !list_empty(&pool->worklist))
 		to_wakeup = first_worker(gcwq);
 	return to_wakeup ? to_wakeup->task : NULL;
 }
@@ -737,7 +745,8 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 static inline void worker_set_flags(struct worker *worker, unsigned int flags,
 				    bool wakeup)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	WARN_ON_ONCE(worker->task != current);
 
@@ -752,7 +761,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
 
 		if (wakeup) {
 			if (atomic_dec_and_test(nr_running) &&
-			    !list_empty(&gcwq->worklist))
+			    !list_empty(&pool->worklist))
 				wake_up_worker(gcwq);
 		} else
 			atomic_dec(nr_running);
@@ -773,7 +782,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
  */
 static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	unsigned int oflags = worker->flags;
 
 	WARN_ON_ONCE(worker->task != current);
@@ -894,9 +903,9 @@ static inline struct list_head *gcwq_determine_ins_pos(struct global_cwq *gcwq,
 	struct work_struct *twork;
 
 	if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
-		return &gcwq->worklist;
+		return &gcwq->pool.worklist;
 
-	list_for_each_entry(twork, &gcwq->worklist, entry) {
+	list_for_each_entry(twork, &gcwq->pool.worklist, entry) {
 		struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
 
 		if (!(tcwq->wq->flags & WQ_HIGHPRI))
@@ -924,7 +933,7 @@ static void insert_work(struct cpu_workqueue_struct *cwq,
 			struct work_struct *work, struct list_head *head,
 			unsigned int extra_flags)
 {
-	struct global_cwq *gcwq = cwq->gcwq;
+	struct global_cwq *gcwq = cwq->pool->gcwq;
 
 	/* we own @work, set data and link */
 	set_work_cwq(work, cwq, extra_flags);
@@ -1196,7 +1205,8 @@ EXPORT_SYMBOL_GPL(queue_delayed_work_on);
  */
 static void worker_enter_idle(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	BUG_ON(worker->flags & WORKER_IDLE);
 	BUG_ON(!list_empty(&worker->entry) &&
@@ -1204,15 +1214,15 @@ static void worker_enter_idle(struct worker *worker)
 
 	/* can't use worker_set_flags(), also called from start_worker() */
 	worker->flags |= WORKER_IDLE;
-	gcwq->nr_idle++;
+	pool->nr_idle++;
 	worker->last_active = jiffies;
 
 	/* idle_list is LIFO */
-	list_add(&worker->entry, &gcwq->idle_list);
+	list_add(&worker->entry, &pool->idle_list);
 
 	if (likely(!(worker->flags & WORKER_ROGUE))) {
-		if (too_many_workers(gcwq) && !timer_pending(&gcwq->idle_timer))
-			mod_timer(&gcwq->idle_timer,
+		if (too_many_workers(gcwq) && !timer_pending(&pool->idle_timer))
+			mod_timer(&pool->idle_timer,
 				  jiffies + IDLE_WORKER_TIMEOUT);
 	} else
 		wake_up_all(&gcwq->trustee_wait);
@@ -1223,7 +1233,7 @@ static void worker_enter_idle(struct worker *worker)
 	 * warning may trigger spuriously.  Check iff trustee is idle.
 	 */
 	WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
-		     gcwq->nr_workers == gcwq->nr_idle &&
+		     pool->nr_workers == pool->nr_idle &&
 		     atomic_read(get_gcwq_nr_running(gcwq->cpu)));
 }
 
@@ -1238,11 +1248,11 @@ static void worker_enter_idle(struct worker *worker)
  */
 static void worker_leave_idle(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
 
 	BUG_ON(!(worker->flags & WORKER_IDLE));
 	worker_clr_flags(worker, WORKER_IDLE);
-	gcwq->nr_idle--;
+	pool->nr_idle--;
 	list_del_init(&worker->entry);
 }
 
@@ -1279,7 +1289,7 @@ static void worker_leave_idle(struct worker *worker)
 static bool worker_maybe_bind_and_lock(struct worker *worker)
 __acquires(&gcwq->lock)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	struct task_struct *task = worker->task;
 
 	while (true) {
@@ -1321,7 +1331,7 @@ __acquires(&gcwq->lock)
 static void worker_rebind_fn(struct work_struct *work)
 {
 	struct worker *worker = container_of(work, struct worker, rebind_work);
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 
 	if (worker_maybe_bind_and_lock(worker))
 		worker_clr_flags(worker, WORKER_REBIND);
@@ -1362,13 +1372,14 @@ static struct worker *alloc_worker(void)
 static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 {
 	bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+	struct worker_pool *pool = &gcwq->pool;
 	struct worker *worker = NULL;
 	int id = -1;
 
 	spin_lock_irq(&gcwq->lock);
-	while (ida_get_new(&gcwq->worker_ida, &id)) {
+	while (ida_get_new(&pool->worker_ida, &id)) {
 		spin_unlock_irq(&gcwq->lock);
-		if (!ida_pre_get(&gcwq->worker_ida, GFP_KERNEL))
+		if (!ida_pre_get(&pool->worker_ida, GFP_KERNEL))
 			goto fail;
 		spin_lock_irq(&gcwq->lock);
 	}
@@ -1378,7 +1389,7 @@ static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 	if (!worker)
 		goto fail;
 
-	worker->gcwq = gcwq;
+	worker->pool = pool;
 	worker->id = id;
 
 	if (!on_unbound_cpu)
@@ -1409,7 +1420,7 @@ static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 fail:
 	if (id >= 0) {
 		spin_lock_irq(&gcwq->lock);
-		ida_remove(&gcwq->worker_ida, id);
+		ida_remove(&pool->worker_ida, id);
 		spin_unlock_irq(&gcwq->lock);
 	}
 	kfree(worker);
@@ -1428,7 +1439,7 @@ fail:
 static void start_worker(struct worker *worker)
 {
 	worker->flags |= WORKER_STARTED;
-	worker->gcwq->nr_workers++;
+	worker->pool->nr_workers++;
 	worker_enter_idle(worker);
 	wake_up_process(worker->task);
 }
@@ -1444,7 +1455,8 @@ static void start_worker(struct worker *worker)
  */
 static void destroy_worker(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	int id = worker->id;
 
 	/* sanity check frenzy */
@@ -1452,9 +1464,9 @@ static void destroy_worker(struct worker *worker)
 	BUG_ON(!list_empty(&worker->scheduled));
 
 	if (worker->flags & WORKER_STARTED)
-		gcwq->nr_workers--;
+		pool->nr_workers--;
 	if (worker->flags & WORKER_IDLE)
-		gcwq->nr_idle--;
+		pool->nr_idle--;
 
 	list_del_init(&worker->entry);
 	worker->flags |= WORKER_DIE;
@@ -1465,7 +1477,7 @@ static void destroy_worker(struct worker *worker)
 	kfree(worker);
 
 	spin_lock_irq(&gcwq->lock);
-	ida_remove(&gcwq->worker_ida, id);
+	ida_remove(&pool->worker_ida, id);
 }
 
 static void idle_worker_timeout(unsigned long __gcwq)
@@ -1479,11 +1491,12 @@ static void idle_worker_timeout(unsigned long __gcwq)
 		unsigned long expires;
 
 		/* idle_list is kept in LIFO order, check the last one */
-		worker = list_entry(gcwq->idle_list.prev, struct worker, entry);
+		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
+				    entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires))
-			mod_timer(&gcwq->idle_timer, expires);
+			mod_timer(&gcwq->pool.idle_timer, expires);
 		else {
 			/* it's been idle for too long, wake up manager */
 			gcwq->flags |= GCWQ_MANAGE_WORKERS;
@@ -1504,7 +1517,7 @@ static bool send_mayday(struct work_struct *work)
 		return false;
 
 	/* mayday mayday mayday */
-	cpu = cwq->gcwq->cpu;
+	cpu = cwq->pool->gcwq->cpu;
 	/* WORK_CPU_UNBOUND can't be set in cpumask, use cpu 0 instead */
 	if (cpu == WORK_CPU_UNBOUND)
 		cpu = 0;
@@ -1527,13 +1540,13 @@ static void gcwq_mayday_timeout(unsigned long __gcwq)
 		 * allocation deadlock.  Send distress signals to
 		 * rescuers.
 		 */
-		list_for_each_entry(work, &gcwq->worklist, entry)
+		list_for_each_entry(work, &gcwq->pool.worklist, entry)
 			send_mayday(work);
 	}
 
 	spin_unlock_irq(&gcwq->lock);
 
-	mod_timer(&gcwq->mayday_timer, jiffies + MAYDAY_INTERVAL);
+	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INTERVAL);
 }
 
 /**
@@ -1568,14 +1581,14 @@ restart:
 	spin_unlock_irq(&gcwq->lock);
 
 	/* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
-	mod_timer(&gcwq->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
+	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
 
 	while (true) {
 		struct worker *worker;
 
 		worker = create_worker(gcwq, true);
 		if (worker) {
-			del_timer_sync(&gcwq->mayday_timer);
+			del_timer_sync(&gcwq->pool.mayday_timer);
 			spin_lock_irq(&gcwq->lock);
 			start_worker(worker);
 			BUG_ON(need_to_create_worker(gcwq));
@@ -1592,7 +1605,7 @@ restart:
 			break;
 	}
 
-	del_timer_sync(&gcwq->mayday_timer);
+	del_timer_sync(&gcwq->pool.mayday_timer);
 	spin_lock_irq(&gcwq->lock);
 	if (need_to_create_worker(gcwq))
 		goto restart;
@@ -1622,11 +1635,12 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
 		struct worker *worker;
 		unsigned long expires;
 
-		worker = list_entry(gcwq->idle_list.prev, struct worker, entry);
+		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
+				    entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires)) {
-			mod_timer(&gcwq->idle_timer, expires);
+			mod_timer(&gcwq->pool.idle_timer, expires);
 			break;
 		}
 
@@ -1659,7 +1673,7 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
  */
 static bool manage_workers(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	bool ret = false;
 
 	if (gcwq->flags & GCWQ_MANAGING_WORKERS)
@@ -1732,7 +1746,7 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *work = list_first_entry(&cwq->delayed_works,
 						    struct work_struct, entry);
-	struct list_head *pos = gcwq_determine_ins_pos(cwq->gcwq, cwq);
+	struct list_head *pos = gcwq_determine_ins_pos(cwq->pool->gcwq, cwq);
 
 	trace_workqueue_activate_work(work);
 	move_linked_works(work, pos, NULL);
@@ -1808,7 +1822,8 @@ __releases(&gcwq->lock)
 __acquires(&gcwq->lock)
 {
 	struct cpu_workqueue_struct *cwq = get_work_cwq(work);
-	struct global_cwq *gcwq = cwq->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	struct hlist_head *bwh = busy_worker_head(gcwq, work);
 	bool cpu_intensive = cwq->wq->flags & WQ_CPU_INTENSIVE;
 	work_func_t f = work->func;
@@ -1854,10 +1869,10 @@ __acquires(&gcwq->lock)
 	 * wake up another worker; otherwise, clear HIGHPRI_PENDING.
 	 */
 	if (unlikely(gcwq->flags & GCWQ_HIGHPRI_PENDING)) {
-		struct work_struct *nwork = list_first_entry(&gcwq->worklist,
-						struct work_struct, entry);
+		struct work_struct *nwork = list_first_entry(&pool->worklist,
+					 struct work_struct, entry);
 
-		if (!list_empty(&gcwq->worklist) &&
+		if (!list_empty(&pool->worklist) &&
 		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
 			wake_up_worker(gcwq);
 		else
@@ -1950,7 +1965,8 @@ static void process_scheduled_works(struct worker *worker)
 static int worker_thread(void *__worker)
 {
 	struct worker *worker = __worker;
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	/* tell the scheduler that this is a workqueue worker */
 	worker->task->flags |= PF_WQ_WORKER;
@@ -1990,7 +2006,7 @@ recheck:
 
 	do {
 		struct work_struct *work =
-			list_first_entry(&gcwq->worklist,
+			list_first_entry(&pool->worklist,
 					 struct work_struct, entry);
 
 		if (likely(!(*work_data_bits(work) & WORK_STRUCT_LINKED))) {
@@ -2064,14 +2080,15 @@ repeat:
 	for_each_mayday_cpu(cpu, wq->mayday_mask) {
 		unsigned int tcpu = is_unbound ? WORK_CPU_UNBOUND : cpu;
 		struct cpu_workqueue_struct *cwq = get_cwq(tcpu, wq);
-		struct global_cwq *gcwq = cwq->gcwq;
+		struct worker_pool *pool = cwq->pool;
+		struct global_cwq *gcwq = pool->gcwq;
 		struct work_struct *work, *n;
 
 		__set_current_state(TASK_RUNNING);
 		mayday_clear_cpu(cpu, wq->mayday_mask);
 
 		/* migrate to the target cpu if possible */
-		rescuer->gcwq = gcwq;
+		rescuer->pool = pool;
 		worker_maybe_bind_and_lock(rescuer);
 
 		/*
@@ -2079,7 +2096,7 @@ repeat:
 		 * process'em.
 		 */
 		BUG_ON(!list_empty(&rescuer->scheduled));
-		list_for_each_entry_safe(work, n, &gcwq->worklist, entry)
+		list_for_each_entry_safe(work, n, &pool->worklist, entry)
 			if (get_work_cwq(work) == cwq)
 				move_linked_works(work, scheduled, &n);
 
@@ -2216,7 +2233,7 @@ static bool flush_workqueue_prep_cwqs(struct workqueue_struct *wq,
 
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
-		struct global_cwq *gcwq = cwq->gcwq;
+		struct global_cwq *gcwq = cwq->pool->gcwq;
 
 		spin_lock_irq(&gcwq->lock);
 
@@ -2432,9 +2449,9 @@ reflush:
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
 		bool drained;
 
-		spin_lock_irq(&cwq->gcwq->lock);
+		spin_lock_irq(&cwq->pool->gcwq->lock);
 		drained = !cwq->nr_active && list_empty(&cwq->delayed_works);
-		spin_unlock_irq(&cwq->gcwq->lock);
+		spin_unlock_irq(&cwq->pool->gcwq->lock);
 
 		if (drained)
 			continue;
@@ -2474,7 +2491,7 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
 		 */
 		smp_rmb();
 		cwq = get_work_cwq(work);
-		if (unlikely(!cwq || gcwq != cwq->gcwq))
+		if (unlikely(!cwq || gcwq != cwq->pool->gcwq))
 			goto already_gone;
 	} else if (wait_executing) {
 		worker = find_worker_executing_work(gcwq, work);
@@ -3017,7 +3034,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 		struct global_cwq *gcwq = get_gcwq(cpu);
 
 		BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
-		cwq->gcwq = gcwq;
+		cwq->pool = &gcwq->pool;
 		cwq->wq = wq;
 		cwq->flush_color = -1;
 		cwq->max_active = max_active;
@@ -3344,7 +3361,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 
 	gcwq->flags |= GCWQ_MANAGING_WORKERS;
 
-	list_for_each_entry(worker, &gcwq->idle_list, entry)
+	list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
 		worker->flags |= WORKER_ROGUE;
 
 	for_each_busy_worker(worker, i, pos, gcwq)
@@ -3369,7 +3386,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	atomic_set(get_gcwq_nr_running(gcwq->cpu), 0);
 
 	spin_unlock_irq(&gcwq->lock);
-	del_timer_sync(&gcwq->idle_timer);
+	del_timer_sync(&gcwq->pool.idle_timer);
 	spin_lock_irq(&gcwq->lock);
 
 	/*
@@ -3391,17 +3408,17 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * may be frozen works in freezable cwqs.  Don't declare
 	 * completion while frozen.
 	 */
-	while (gcwq->nr_workers != gcwq->nr_idle ||
+	while (gcwq->pool.nr_workers != gcwq->pool.nr_idle ||
 	       gcwq->flags & GCWQ_FREEZING ||
 	       gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
 		int nr_works = 0;
 
-		list_for_each_entry(work, &gcwq->worklist, entry) {
+		list_for_each_entry(work, &gcwq->pool.worklist, entry) {
 			send_mayday(work);
 			nr_works++;
 		}
 
-		list_for_each_entry(worker, &gcwq->idle_list, entry) {
+		list_for_each_entry(worker, &gcwq->pool.idle_list, entry) {
 			if (!nr_works--)
 				break;
 			wake_up_process(worker->task);
@@ -3428,11 +3445,11 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * all workers till we're canceled.
 	 */
 	do {
-		rc = trustee_wait_event(!list_empty(&gcwq->idle_list));
-		while (!list_empty(&gcwq->idle_list))
-			destroy_worker(list_first_entry(&gcwq->idle_list,
+		rc = trustee_wait_event(!list_empty(&gcwq->pool.idle_list));
+		while (!list_empty(&gcwq->pool.idle_list))
+			destroy_worker(list_first_entry(&gcwq->pool.idle_list,
 							struct worker, entry));
-	} while (gcwq->nr_workers && rc >= 0);
+	} while (gcwq->pool.nr_workers && rc >= 0);
 
 	/*
 	 * At this point, either draining has completed and no worker
@@ -3441,7 +3458,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * Tell the remaining busy ones to rebind once it finishes the
 	 * currently scheduled works by scheduling the rebind_work.
 	 */
-	WARN_ON(!list_empty(&gcwq->idle_list));
+	WARN_ON(!list_empty(&gcwq->pool.idle_list));
 
 	for_each_busy_worker(worker, i, pos, gcwq) {
 		struct work_struct *rebind_work = &worker->rebind_work;
@@ -3522,7 +3539,7 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		kthread_bind(new_trustee, cpu);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->first_idle);
+		BUG_ON(gcwq->pool.first_idle);
 		new_worker = create_worker(gcwq, false);
 		if (!new_worker) {
 			if (new_trustee)
@@ -3544,8 +3561,8 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->first_idle);
-		gcwq->first_idle = new_worker;
+		BUG_ON(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = new_worker;
 		break;
 
 	case CPU_DYING:
@@ -3562,8 +3579,8 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		gcwq->trustee_state = TRUSTEE_BUTCHER;
 		/* fall through */
 	case CPU_UP_CANCELED:
-		destroy_worker(gcwq->first_idle);
-		gcwq->first_idle = NULL;
+		destroy_worker(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = NULL;
 		break;
 
 	case CPU_DOWN_FAILED:
@@ -3581,11 +3598,11 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		 * take a look.
 		 */
 		spin_unlock_irq(&gcwq->lock);
-		kthread_bind(gcwq->first_idle->task, cpu);
+		kthread_bind(gcwq->pool.first_idle->task, cpu);
 		spin_lock_irq(&gcwq->lock);
 		gcwq->flags |= GCWQ_MANAGE_WORKERS;
-		start_worker(gcwq->first_idle);
-		gcwq->first_idle = NULL;
+		start_worker(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = NULL;
 		break;
 	}
 
@@ -3794,22 +3811,23 @@ static int __init init_workqueues(void)
 		struct global_cwq *gcwq = get_gcwq(cpu);
 
 		spin_lock_init(&gcwq->lock);
-		INIT_LIST_HEAD(&gcwq->worklist);
+		gcwq->pool.gcwq = gcwq;
+		INIT_LIST_HEAD(&gcwq->pool.worklist);
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		INIT_LIST_HEAD(&gcwq->idle_list);
+		INIT_LIST_HEAD(&gcwq->pool.idle_list);
 		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
 			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
 
-		init_timer_deferrable(&gcwq->idle_timer);
-		gcwq->idle_timer.function = idle_worker_timeout;
-		gcwq->idle_timer.data = (unsigned long)gcwq;
+		init_timer_deferrable(&gcwq->pool.idle_timer);
+		gcwq->pool.idle_timer.function = idle_worker_timeout;
+		gcwq->pool.idle_timer.data = (unsigned long)gcwq;
 
-		setup_timer(&gcwq->mayday_timer, gcwq_mayday_timeout,
+		setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
 			    (unsigned long)gcwq);
 
-		ida_init(&gcwq->worker_ida);
+		ida_init(&gcwq->pool.worker_ida);
 
 		gcwq->trustee_state = TRUSTEE_DONE;
 		init_waitqueue_head(&gcwq->trustee_wait);
-- 
1.7.7.3


* [PATCH UPDATED 2/6] workqueue: factor out worker_pool from global_cwq
@ 2012-07-12 21:49       ` Tejun Heo
  0 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-12 21:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen

From bd7bdd43dcb81bb08240b9401b36a104f77dc135 Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@kernel.org>
Date: Thu, 12 Jul 2012 14:46:37 -0700

Move worklist and all worker management fields from global_cwq into
the new struct worker_pool.  worker_pool points back to the containing
gcwq.  worker and cpu_workqueue_struct are updated to point to
worker_pool instead of gcwq too.

This change is mechanical and doesn't introduce any functional
difference other than rearranging of fields and an added level of
indirection in some places.  This is to prepare for multiple pools per
gcwq.

v2: Comment typo fixes as suggested by Namhyung.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
---
Minor update.  git branches updated accordingly.  Thanks.

 include/trace/events/workqueue.h |    2 +-
 kernel/workqueue.c               |  216 ++++++++++++++++++++-----------------
 2 files changed, 118 insertions(+), 100 deletions(-)

diff --git a/include/trace/events/workqueue.h b/include/trace/events/workqueue.h
index 4018f50..f28d1b6 100644
--- a/include/trace/events/workqueue.h
+++ b/include/trace/events/workqueue.h
@@ -54,7 +54,7 @@ TRACE_EVENT(workqueue_queue_work,
 		__entry->function	= work->func;
 		__entry->workqueue	= cwq->wq;
 		__entry->req_cpu	= req_cpu;
-		__entry->cpu		= cwq->gcwq->cpu;
+		__entry->cpu		= cwq->pool->gcwq->cpu;
 	),
 
 	TP_printk("work struct=%p function=%pf workqueue=%p req_cpu=%u cpu=%u",
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 27637c2..61f1544 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -115,6 +115,7 @@ enum {
  */
 
 struct global_cwq;
+struct worker_pool;
 
 /*
  * The poor guys doing the actual heavy lifting.  All on-duty workers
@@ -131,7 +132,7 @@ struct worker {
 	struct cpu_workqueue_struct *current_cwq; /* L: current_work's cwq */
 	struct list_head	scheduled;	/* L: scheduled works */
 	struct task_struct	*task;		/* I: worker task */
-	struct global_cwq	*gcwq;		/* I: the associated gcwq */
+	struct worker_pool	*pool;		/* I: the associated pool */
 	/* 64 bytes boundary on 64bit, 32 on 32bit */
 	unsigned long		last_active;	/* L: last active timestamp */
 	unsigned int		flags;		/* X: flags */
@@ -139,6 +140,21 @@ struct worker {
 	struct work_struct	rebind_work;	/* L: rebind worker to cpu */
 };
 
+struct worker_pool {
+	struct global_cwq	*gcwq;		/* I: the owning gcwq */
+
+	struct list_head	worklist;	/* L: list of pending works */
+	int			nr_workers;	/* L: total number of workers */
+	int			nr_idle;	/* L: currently idle ones */
+
+	struct list_head	idle_list;	/* X: list of idle workers */
+	struct timer_list	idle_timer;	/* L: worker idle timeout */
+	struct timer_list	mayday_timer;	/* L: SOS timer for workers */
+
+	struct ida		worker_ida;	/* L: for worker IDs */
+	struct worker		*first_idle;	/* L: first idle worker */
+};
+
 /*
  * Global per-cpu workqueue.  There's one and only one for each cpu
  * and all works are queued and processed here regardless of their
@@ -146,27 +162,18 @@ struct worker {
  */
 struct global_cwq {
 	spinlock_t		lock;		/* the gcwq lock */
-	struct list_head	worklist;	/* L: list of pending works */
 	unsigned int		cpu;		/* I: the associated cpu */
 	unsigned int		flags;		/* L: GCWQ_* flags */
 
-	int			nr_workers;	/* L: total number of workers */
-	int			nr_idle;	/* L: currently idle ones */
-
-	/* workers are chained either in the idle_list or busy_hash */
-	struct list_head	idle_list;	/* X: list of idle workers */
+	/* workers are chained either in busy_hash or pool idle_list */
 	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
 						/* L: hash of busy workers */
 
-	struct timer_list	idle_timer;	/* L: worker idle timeout */
-	struct timer_list	mayday_timer;	/* L: SOS timer for dworkers */
-
-	struct ida		worker_ida;	/* L: for worker IDs */
+	struct worker_pool	pool;		/* the worker pools */
 
 	struct task_struct	*trustee;	/* L: for gcwq shutdown */
 	unsigned int		trustee_state;	/* L: trustee state */
 	wait_queue_head_t	trustee_wait;	/* trustee wait */
-	struct worker		*first_idle;	/* L: first idle worker */
 } ____cacheline_aligned_in_smp;
 
 /*
@@ -175,7 +182,7 @@ struct global_cwq {
  * aligned at two's power of the number of flag bits.
  */
 struct cpu_workqueue_struct {
-	struct global_cwq	*gcwq;		/* I: the associated gcwq */
+	struct worker_pool	*pool;		/* I: the associated pool */
 	struct workqueue_struct *wq;		/* I: the owning workqueue */
 	int			work_color;	/* L: current color */
 	int			flush_color;	/* L: flushing color */
@@ -555,7 +562,7 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
 
 	if (data & WORK_STRUCT_CWQ)
 		return ((struct cpu_workqueue_struct *)
-			(data & WORK_STRUCT_WQ_DATA_MASK))->gcwq;
+			(data & WORK_STRUCT_WQ_DATA_MASK))->pool->gcwq;
 
 	cpu = data >> WORK_STRUCT_FLAG_BITS;
 	if (cpu == WORK_CPU_NONE)
@@ -587,13 +594,13 @@ static bool __need_more_worker(struct global_cwq *gcwq)
  */
 static bool need_more_worker(struct global_cwq *gcwq)
 {
-	return !list_empty(&gcwq->worklist) && __need_more_worker(gcwq);
+	return !list_empty(&gcwq->pool.worklist) && __need_more_worker(gcwq);
 }
 
 /* Can I start working?  Called from busy but !running workers. */
 static bool may_start_working(struct global_cwq *gcwq)
 {
-	return gcwq->nr_idle;
+	return gcwq->pool.nr_idle;
 }
 
 /* Do I need to keep working?  Called from currently running workers. */
@@ -601,7 +608,7 @@ static bool keep_working(struct global_cwq *gcwq)
 {
 	atomic_t *nr_running = get_gcwq_nr_running(gcwq->cpu);
 
-	return !list_empty(&gcwq->worklist) &&
+	return !list_empty(&gcwq->pool.worklist) &&
 		(atomic_read(nr_running) <= 1 ||
 		 gcwq->flags & GCWQ_HIGHPRI_PENDING);
 }
@@ -622,8 +629,8 @@ static bool need_to_manage_workers(struct global_cwq *gcwq)
 static bool too_many_workers(struct global_cwq *gcwq)
 {
 	bool managing = gcwq->flags & GCWQ_MANAGING_WORKERS;
-	int nr_idle = gcwq->nr_idle + managing; /* manager is considered idle */
-	int nr_busy = gcwq->nr_workers - nr_idle;
+	int nr_idle = gcwq->pool.nr_idle + managing; /* manager is considered idle */
+	int nr_busy = gcwq->pool.nr_workers - nr_idle;
 
 	return nr_idle > 2 && (nr_idle - 2) * MAX_IDLE_WORKERS_RATIO >= nr_busy;
 }
@@ -635,10 +642,10 @@ static bool too_many_workers(struct global_cwq *gcwq)
 /* Return the first worker.  Safe with preemption disabled */
 static struct worker *first_worker(struct global_cwq *gcwq)
 {
-	if (unlikely(list_empty(&gcwq->idle_list)))
+	if (unlikely(list_empty(&gcwq->pool.idle_list)))
 		return NULL;
 
-	return list_first_entry(&gcwq->idle_list, struct worker, entry);
+	return list_first_entry(&gcwq->pool.idle_list, struct worker, entry);
 }
 
 /**
@@ -696,7 +703,8 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 				       unsigned int cpu)
 {
 	struct worker *worker = kthread_data(task), *to_wakeup = NULL;
-	struct global_cwq *gcwq = get_gcwq(cpu);
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	atomic_t *nr_running = get_gcwq_nr_running(cpu);
 
 	if (worker->flags & WORKER_NOT_RUNNING)
@@ -716,7 +724,7 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 	 * could be manipulating idle_list, so dereferencing idle_list
 	 * without gcwq lock is safe.
 	 */
-	if (atomic_dec_and_test(nr_running) && !list_empty(&gcwq->worklist))
+	if (atomic_dec_and_test(nr_running) && !list_empty(&pool->worklist))
 		to_wakeup = first_worker(gcwq);
 	return to_wakeup ? to_wakeup->task : NULL;
 }
@@ -737,7 +745,8 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 static inline void worker_set_flags(struct worker *worker, unsigned int flags,
 				    bool wakeup)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	WARN_ON_ONCE(worker->task != current);
 
@@ -752,7 +761,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
 
 		if (wakeup) {
 			if (atomic_dec_and_test(nr_running) &&
-			    !list_empty(&gcwq->worklist))
+			    !list_empty(&pool->worklist))
 				wake_up_worker(gcwq);
 		} else
 			atomic_dec(nr_running);
@@ -773,7 +782,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
  */
 static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	unsigned int oflags = worker->flags;
 
 	WARN_ON_ONCE(worker->task != current);
@@ -894,9 +903,9 @@ static inline struct list_head *gcwq_determine_ins_pos(struct global_cwq *gcwq,
 	struct work_struct *twork;
 
 	if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
-		return &gcwq->worklist;
+		return &gcwq->pool.worklist;
 
-	list_for_each_entry(twork, &gcwq->worklist, entry) {
+	list_for_each_entry(twork, &gcwq->pool.worklist, entry) {
 		struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
 
 		if (!(tcwq->wq->flags & WQ_HIGHPRI))
@@ -924,7 +933,7 @@ static void insert_work(struct cpu_workqueue_struct *cwq,
 			struct work_struct *work, struct list_head *head,
 			unsigned int extra_flags)
 {
-	struct global_cwq *gcwq = cwq->gcwq;
+	struct global_cwq *gcwq = cwq->pool->gcwq;
 
 	/* we own @work, set data and link */
 	set_work_cwq(work, cwq, extra_flags);
@@ -1196,7 +1205,8 @@ EXPORT_SYMBOL_GPL(queue_delayed_work_on);
  */
 static void worker_enter_idle(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	BUG_ON(worker->flags & WORKER_IDLE);
 	BUG_ON(!list_empty(&worker->entry) &&
@@ -1204,15 +1214,15 @@ static void worker_enter_idle(struct worker *worker)
 
 	/* can't use worker_set_flags(), also called from start_worker() */
 	worker->flags |= WORKER_IDLE;
-	gcwq->nr_idle++;
+	pool->nr_idle++;
 	worker->last_active = jiffies;
 
 	/* idle_list is LIFO */
-	list_add(&worker->entry, &gcwq->idle_list);
+	list_add(&worker->entry, &pool->idle_list);
 
 	if (likely(!(worker->flags & WORKER_ROGUE))) {
-		if (too_many_workers(gcwq) && !timer_pending(&gcwq->idle_timer))
-			mod_timer(&gcwq->idle_timer,
+		if (too_many_workers(gcwq) && !timer_pending(&pool->idle_timer))
+			mod_timer(&pool->idle_timer,
 				  jiffies + IDLE_WORKER_TIMEOUT);
 	} else
 		wake_up_all(&gcwq->trustee_wait);
@@ -1223,7 +1233,7 @@ static void worker_enter_idle(struct worker *worker)
 	 * warning may trigger spuriously.  Check iff trustee is idle.
 	 */
 	WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
-		     gcwq->nr_workers == gcwq->nr_idle &&
+		     pool->nr_workers == pool->nr_idle &&
 		     atomic_read(get_gcwq_nr_running(gcwq->cpu)));
 }
 
@@ -1238,11 +1248,11 @@ static void worker_enter_idle(struct worker *worker)
  */
 static void worker_leave_idle(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
 
 	BUG_ON(!(worker->flags & WORKER_IDLE));
 	worker_clr_flags(worker, WORKER_IDLE);
-	gcwq->nr_idle--;
+	pool->nr_idle--;
 	list_del_init(&worker->entry);
 }
 
@@ -1279,7 +1289,7 @@ static void worker_leave_idle(struct worker *worker)
 static bool worker_maybe_bind_and_lock(struct worker *worker)
 __acquires(&gcwq->lock)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	struct task_struct *task = worker->task;
 
 	while (true) {
@@ -1321,7 +1331,7 @@ __acquires(&gcwq->lock)
 static void worker_rebind_fn(struct work_struct *work)
 {
 	struct worker *worker = container_of(work, struct worker, rebind_work);
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 
 	if (worker_maybe_bind_and_lock(worker))
 		worker_clr_flags(worker, WORKER_REBIND);
@@ -1362,13 +1372,14 @@ static struct worker *alloc_worker(void)
 static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 {
 	bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+	struct worker_pool *pool = &gcwq->pool;
 	struct worker *worker = NULL;
 	int id = -1;
 
 	spin_lock_irq(&gcwq->lock);
-	while (ida_get_new(&gcwq->worker_ida, &id)) {
+	while (ida_get_new(&pool->worker_ida, &id)) {
 		spin_unlock_irq(&gcwq->lock);
-		if (!ida_pre_get(&gcwq->worker_ida, GFP_KERNEL))
+		if (!ida_pre_get(&pool->worker_ida, GFP_KERNEL))
 			goto fail;
 		spin_lock_irq(&gcwq->lock);
 	}
@@ -1378,7 +1389,7 @@ static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 	if (!worker)
 		goto fail;
 
-	worker->gcwq = gcwq;
+	worker->pool = pool;
 	worker->id = id;
 
 	if (!on_unbound_cpu)
@@ -1409,7 +1420,7 @@ static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 fail:
 	if (id >= 0) {
 		spin_lock_irq(&gcwq->lock);
-		ida_remove(&gcwq->worker_ida, id);
+		ida_remove(&pool->worker_ida, id);
 		spin_unlock_irq(&gcwq->lock);
 	}
 	kfree(worker);
@@ -1428,7 +1439,7 @@ fail:
 static void start_worker(struct worker *worker)
 {
 	worker->flags |= WORKER_STARTED;
-	worker->gcwq->nr_workers++;
+	worker->pool->nr_workers++;
 	worker_enter_idle(worker);
 	wake_up_process(worker->task);
 }
@@ -1444,7 +1455,8 @@ static void start_worker(struct worker *worker)
  */
 static void destroy_worker(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	int id = worker->id;
 
 	/* sanity check frenzy */
@@ -1452,9 +1464,9 @@ static void destroy_worker(struct worker *worker)
 	BUG_ON(!list_empty(&worker->scheduled));
 
 	if (worker->flags & WORKER_STARTED)
-		gcwq->nr_workers--;
+		pool->nr_workers--;
 	if (worker->flags & WORKER_IDLE)
-		gcwq->nr_idle--;
+		pool->nr_idle--;
 
 	list_del_init(&worker->entry);
 	worker->flags |= WORKER_DIE;
@@ -1465,7 +1477,7 @@ static void destroy_worker(struct worker *worker)
 	kfree(worker);
 
 	spin_lock_irq(&gcwq->lock);
-	ida_remove(&gcwq->worker_ida, id);
+	ida_remove(&pool->worker_ida, id);
 }
 
 static void idle_worker_timeout(unsigned long __gcwq)
@@ -1479,11 +1491,12 @@ static void idle_worker_timeout(unsigned long __gcwq)
 		unsigned long expires;
 
 		/* idle_list is kept in LIFO order, check the last one */
-		worker = list_entry(gcwq->idle_list.prev, struct worker, entry);
+		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
+				    entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires))
-			mod_timer(&gcwq->idle_timer, expires);
+			mod_timer(&gcwq->pool.idle_timer, expires);
 		else {
 			/* it's been idle for too long, wake up manager */
 			gcwq->flags |= GCWQ_MANAGE_WORKERS;
@@ -1504,7 +1517,7 @@ static bool send_mayday(struct work_struct *work)
 		return false;
 
 	/* mayday mayday mayday */
-	cpu = cwq->gcwq->cpu;
+	cpu = cwq->pool->gcwq->cpu;
 	/* WORK_CPU_UNBOUND can't be set in cpumask, use cpu 0 instead */
 	if (cpu == WORK_CPU_UNBOUND)
 		cpu = 0;
@@ -1527,13 +1540,13 @@ static void gcwq_mayday_timeout(unsigned long __gcwq)
 		 * allocation deadlock.  Send distress signals to
 		 * rescuers.
 		 */
-		list_for_each_entry(work, &gcwq->worklist, entry)
+		list_for_each_entry(work, &gcwq->pool.worklist, entry)
 			send_mayday(work);
 	}
 
 	spin_unlock_irq(&gcwq->lock);
 
-	mod_timer(&gcwq->mayday_timer, jiffies + MAYDAY_INTERVAL);
+	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INTERVAL);
 }
 
 /**
@@ -1568,14 +1581,14 @@ restart:
 	spin_unlock_irq(&gcwq->lock);
 
 	/* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
-	mod_timer(&gcwq->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
+	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
 
 	while (true) {
 		struct worker *worker;
 
 		worker = create_worker(gcwq, true);
 		if (worker) {
-			del_timer_sync(&gcwq->mayday_timer);
+			del_timer_sync(&gcwq->pool.mayday_timer);
 			spin_lock_irq(&gcwq->lock);
 			start_worker(worker);
 			BUG_ON(need_to_create_worker(gcwq));
@@ -1592,7 +1605,7 @@ restart:
 			break;
 	}
 
-	del_timer_sync(&gcwq->mayday_timer);
+	del_timer_sync(&gcwq->pool.mayday_timer);
 	spin_lock_irq(&gcwq->lock);
 	if (need_to_create_worker(gcwq))
 		goto restart;
@@ -1622,11 +1635,12 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
 		struct worker *worker;
 		unsigned long expires;
 
-		worker = list_entry(gcwq->idle_list.prev, struct worker, entry);
+		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
+				    entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires)) {
-			mod_timer(&gcwq->idle_timer, expires);
+			mod_timer(&gcwq->pool.idle_timer, expires);
 			break;
 		}
 
@@ -1659,7 +1673,7 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
  */
 static bool manage_workers(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	bool ret = false;
 
 	if (gcwq->flags & GCWQ_MANAGING_WORKERS)
@@ -1732,7 +1746,7 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *work = list_first_entry(&cwq->delayed_works,
 						    struct work_struct, entry);
-	struct list_head *pos = gcwq_determine_ins_pos(cwq->gcwq, cwq);
+	struct list_head *pos = gcwq_determine_ins_pos(cwq->pool->gcwq, cwq);
 
 	trace_workqueue_activate_work(work);
 	move_linked_works(work, pos, NULL);
@@ -1808,7 +1822,8 @@ __releases(&gcwq->lock)
 __acquires(&gcwq->lock)
 {
 	struct cpu_workqueue_struct *cwq = get_work_cwq(work);
-	struct global_cwq *gcwq = cwq->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	struct hlist_head *bwh = busy_worker_head(gcwq, work);
 	bool cpu_intensive = cwq->wq->flags & WQ_CPU_INTENSIVE;
 	work_func_t f = work->func;
@@ -1854,10 +1869,10 @@ __acquires(&gcwq->lock)
 	 * wake up another worker; otherwise, clear HIGHPRI_PENDING.
 	 */
 	if (unlikely(gcwq->flags & GCWQ_HIGHPRI_PENDING)) {
-		struct work_struct *nwork = list_first_entry(&gcwq->worklist,
-						struct work_struct, entry);
+		struct work_struct *nwork = list_first_entry(&pool->worklist,
+					 struct work_struct, entry);
 
-		if (!list_empty(&gcwq->worklist) &&
+		if (!list_empty(&pool->worklist) &&
 		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
 			wake_up_worker(gcwq);
 		else
@@ -1950,7 +1965,8 @@ static void process_scheduled_works(struct worker *worker)
 static int worker_thread(void *__worker)
 {
 	struct worker *worker = __worker;
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	/* tell the scheduler that this is a workqueue worker */
 	worker->task->flags |= PF_WQ_WORKER;
@@ -1990,7 +2006,7 @@ recheck:
 
 	do {
 		struct work_struct *work =
-			list_first_entry(&gcwq->worklist,
+			list_first_entry(&pool->worklist,
 					 struct work_struct, entry);
 
 		if (likely(!(*work_data_bits(work) & WORK_STRUCT_LINKED))) {
@@ -2064,14 +2080,15 @@ repeat:
 	for_each_mayday_cpu(cpu, wq->mayday_mask) {
 		unsigned int tcpu = is_unbound ? WORK_CPU_UNBOUND : cpu;
 		struct cpu_workqueue_struct *cwq = get_cwq(tcpu, wq);
-		struct global_cwq *gcwq = cwq->gcwq;
+		struct worker_pool *pool = cwq->pool;
+		struct global_cwq *gcwq = pool->gcwq;
 		struct work_struct *work, *n;
 
 		__set_current_state(TASK_RUNNING);
 		mayday_clear_cpu(cpu, wq->mayday_mask);
 
 		/* migrate to the target cpu if possible */
-		rescuer->gcwq = gcwq;
+		rescuer->pool = pool;
 		worker_maybe_bind_and_lock(rescuer);
 
 		/*
@@ -2079,7 +2096,7 @@ repeat:
 		 * process'em.
 		 */
 		BUG_ON(!list_empty(&rescuer->scheduled));
-		list_for_each_entry_safe(work, n, &gcwq->worklist, entry)
+		list_for_each_entry_safe(work, n, &pool->worklist, entry)
 			if (get_work_cwq(work) == cwq)
 				move_linked_works(work, scheduled, &n);
 
@@ -2216,7 +2233,7 @@ static bool flush_workqueue_prep_cwqs(struct workqueue_struct *wq,
 
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
-		struct global_cwq *gcwq = cwq->gcwq;
+		struct global_cwq *gcwq = cwq->pool->gcwq;
 
 		spin_lock_irq(&gcwq->lock);
 
@@ -2432,9 +2449,9 @@ reflush:
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
 		bool drained;
 
-		spin_lock_irq(&cwq->gcwq->lock);
+		spin_lock_irq(&cwq->pool->gcwq->lock);
 		drained = !cwq->nr_active && list_empty(&cwq->delayed_works);
-		spin_unlock_irq(&cwq->gcwq->lock);
+		spin_unlock_irq(&cwq->pool->gcwq->lock);
 
 		if (drained)
 			continue;
@@ -2474,7 +2491,7 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
 		 */
 		smp_rmb();
 		cwq = get_work_cwq(work);
-		if (unlikely(!cwq || gcwq != cwq->gcwq))
+		if (unlikely(!cwq || gcwq != cwq->pool->gcwq))
 			goto already_gone;
 	} else if (wait_executing) {
 		worker = find_worker_executing_work(gcwq, work);
@@ -3017,7 +3034,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 		struct global_cwq *gcwq = get_gcwq(cpu);
 
 		BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
-		cwq->gcwq = gcwq;
+		cwq->pool = &gcwq->pool;
 		cwq->wq = wq;
 		cwq->flush_color = -1;
 		cwq->max_active = max_active;
@@ -3344,7 +3361,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 
 	gcwq->flags |= GCWQ_MANAGING_WORKERS;
 
-	list_for_each_entry(worker, &gcwq->idle_list, entry)
+	list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
 		worker->flags |= WORKER_ROGUE;
 
 	for_each_busy_worker(worker, i, pos, gcwq)
@@ -3369,7 +3386,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	atomic_set(get_gcwq_nr_running(gcwq->cpu), 0);
 
 	spin_unlock_irq(&gcwq->lock);
-	del_timer_sync(&gcwq->idle_timer);
+	del_timer_sync(&gcwq->pool.idle_timer);
 	spin_lock_irq(&gcwq->lock);
 
 	/*
@@ -3391,17 +3408,17 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * may be frozen works in freezable cwqs.  Don't declare
 	 * completion while frozen.
 	 */
-	while (gcwq->nr_workers != gcwq->nr_idle ||
+	while (gcwq->pool.nr_workers != gcwq->pool.nr_idle ||
 	       gcwq->flags & GCWQ_FREEZING ||
 	       gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
 		int nr_works = 0;
 
-		list_for_each_entry(work, &gcwq->worklist, entry) {
+		list_for_each_entry(work, &gcwq->pool.worklist, entry) {
 			send_mayday(work);
 			nr_works++;
 		}
 
-		list_for_each_entry(worker, &gcwq->idle_list, entry) {
+		list_for_each_entry(worker, &gcwq->pool.idle_list, entry) {
 			if (!nr_works--)
 				break;
 			wake_up_process(worker->task);
@@ -3428,11 +3445,11 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * all workers till we're canceled.
 	 */
 	do {
-		rc = trustee_wait_event(!list_empty(&gcwq->idle_list));
-		while (!list_empty(&gcwq->idle_list))
-			destroy_worker(list_first_entry(&gcwq->idle_list,
+		rc = trustee_wait_event(!list_empty(&gcwq->pool.idle_list));
+		while (!list_empty(&gcwq->pool.idle_list))
+			destroy_worker(list_first_entry(&gcwq->pool.idle_list,
 							struct worker, entry));
-	} while (gcwq->nr_workers && rc >= 0);
+	} while (gcwq->pool.nr_workers && rc >= 0);
 
 	/*
 	 * At this point, either draining has completed and no worker
@@ -3441,7 +3458,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * Tell the remaining busy ones to rebind once it finishes the
 	 * currently scheduled works by scheduling the rebind_work.
 	 */
-	WARN_ON(!list_empty(&gcwq->idle_list));
+	WARN_ON(!list_empty(&gcwq->pool.idle_list));
 
 	for_each_busy_worker(worker, i, pos, gcwq) {
 		struct work_struct *rebind_work = &worker->rebind_work;
@@ -3522,7 +3539,7 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		kthread_bind(new_trustee, cpu);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->first_idle);
+		BUG_ON(gcwq->pool.first_idle);
 		new_worker = create_worker(gcwq, false);
 		if (!new_worker) {
 			if (new_trustee)
@@ -3544,8 +3561,8 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->first_idle);
-		gcwq->first_idle = new_worker;
+		BUG_ON(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = new_worker;
 		break;
 
 	case CPU_DYING:
@@ -3562,8 +3579,8 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		gcwq->trustee_state = TRUSTEE_BUTCHER;
 		/* fall through */
 	case CPU_UP_CANCELED:
-		destroy_worker(gcwq->first_idle);
-		gcwq->first_idle = NULL;
+		destroy_worker(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = NULL;
 		break;
 
 	case CPU_DOWN_FAILED:
@@ -3581,11 +3598,11 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		 * take a look.
 		 */
 		spin_unlock_irq(&gcwq->lock);
-		kthread_bind(gcwq->first_idle->task, cpu);
+		kthread_bind(gcwq->pool.first_idle->task, cpu);
 		spin_lock_irq(&gcwq->lock);
 		gcwq->flags |= GCWQ_MANAGE_WORKERS;
-		start_worker(gcwq->first_idle);
-		gcwq->first_idle = NULL;
+		start_worker(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = NULL;
 		break;
 	}
 
@@ -3794,22 +3811,23 @@ static int __init init_workqueues(void)
 		struct global_cwq *gcwq = get_gcwq(cpu);
 
 		spin_lock_init(&gcwq->lock);
-		INIT_LIST_HEAD(&gcwq->worklist);
+		gcwq->pool.gcwq = gcwq;
+		INIT_LIST_HEAD(&gcwq->pool.worklist);
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		INIT_LIST_HEAD(&gcwq->idle_list);
+		INIT_LIST_HEAD(&gcwq->pool.idle_list);
 		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
 			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
 
-		init_timer_deferrable(&gcwq->idle_timer);
-		gcwq->idle_timer.function = idle_worker_timeout;
-		gcwq->idle_timer.data = (unsigned long)gcwq;
+		init_timer_deferrable(&gcwq->pool.idle_timer);
+		gcwq->pool.idle_timer.function = idle_worker_timeout;
+		gcwq->pool.idle_timer.data = (unsigned long)gcwq;
 
-		setup_timer(&gcwq->mayday_timer, gcwq_mayday_timeout,
+		setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
 			    (unsigned long)gcwq);
 
-		ida_init(&gcwq->worker_ida);
+		ida_init(&gcwq->pool.worker_ida);
 
 		gcwq->trustee_state = TRUSTEE_DONE;
 		init_waitqueue_head(&gcwq->trustee_wait);
-- 
1.7.7.3



* [PATCH UPDATED 2/6] workqueue: factor out worker_pool from global_cwq
@ 2012-07-12 21:49       ` Tejun Heo
  0 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-12 21:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: axboe, elder, rni, martin.petersen, linux-bluetooth, torvalds,
	marcel, vwadekar, swhiteho, herbert, bpm, linux-crypto, gustavo,
	xfs, joshhunt00, davem, vgoyal, johan.hedberg

From bd7bdd43dcb81bb08240b9401b36a104f77dc135 Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@kernel.org>
Date: Thu, 12 Jul 2012 14:46:37 -0700

Move worklist and all worker management fields from global_cwq into
the new struct worker_pool.  worker_pool points back to the containing
gcwq.  worker and cpu_workqueue_struct are updated to point to
worker_pool instead of gcwq too.

This change is mechanical and doesn't introduce any functional
difference other than rearranging of fields and an added level of
indirection in some places.  This is to prepare for multiple pools per
gcwq.

v2: Comment typo fixes as suggested by Namhyung.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
---
Minor update.  git branches updated accordingly.  Thanks.

 include/trace/events/workqueue.h |    2 +-
 kernel/workqueue.c               |  216 ++++++++++++++++++++-----------------
 2 files changed, 118 insertions(+), 100 deletions(-)

diff --git a/include/trace/events/workqueue.h b/include/trace/events/workqueue.h
index 4018f50..f28d1b6 100644
--- a/include/trace/events/workqueue.h
+++ b/include/trace/events/workqueue.h
@@ -54,7 +54,7 @@ TRACE_EVENT(workqueue_queue_work,
 		__entry->function	= work->func;
 		__entry->workqueue	= cwq->wq;
 		__entry->req_cpu	= req_cpu;
-		__entry->cpu		= cwq->gcwq->cpu;
+		__entry->cpu		= cwq->pool->gcwq->cpu;
 	),
 
 	TP_printk("work struct=%p function=%pf workqueue=%p req_cpu=%u cpu=%u",
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 27637c2..61f1544 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -115,6 +115,7 @@ enum {
  */
 
 struct global_cwq;
+struct worker_pool;
 
 /*
  * The poor guys doing the actual heavy lifting.  All on-duty workers
@@ -131,7 +132,7 @@ struct worker {
 	struct cpu_workqueue_struct *current_cwq; /* L: current_work's cwq */
 	struct list_head	scheduled;	/* L: scheduled works */
 	struct task_struct	*task;		/* I: worker task */
-	struct global_cwq	*gcwq;		/* I: the associated gcwq */
+	struct worker_pool	*pool;		/* I: the associated pool */
 	/* 64 bytes boundary on 64bit, 32 on 32bit */
 	unsigned long		last_active;	/* L: last active timestamp */
 	unsigned int		flags;		/* X: flags */
@@ -139,6 +140,21 @@ struct worker {
 	struct work_struct	rebind_work;	/* L: rebind worker to cpu */
 };
 
+struct worker_pool {
+	struct global_cwq	*gcwq;		/* I: the owning gcwq */
+
+	struct list_head	worklist;	/* L: list of pending works */
+	int			nr_workers;	/* L: total number of workers */
+	int			nr_idle;	/* L: currently idle ones */
+
+	struct list_head	idle_list;	/* X: list of idle workers */
+	struct timer_list	idle_timer;	/* L: worker idle timeout */
+	struct timer_list	mayday_timer;	/* L: SOS timer for workers */
+
+	struct ida		worker_ida;	/* L: for worker IDs */
+	struct worker		*first_idle;	/* L: first idle worker */
+};
+
 /*
  * Global per-cpu workqueue.  There's one and only one for each cpu
  * and all works are queued and processed here regardless of their
@@ -146,27 +162,18 @@ struct worker {
  */
 struct global_cwq {
 	spinlock_t		lock;		/* the gcwq lock */
-	struct list_head	worklist;	/* L: list of pending works */
 	unsigned int		cpu;		/* I: the associated cpu */
 	unsigned int		flags;		/* L: GCWQ_* flags */
 
-	int			nr_workers;	/* L: total number of workers */
-	int			nr_idle;	/* L: currently idle ones */
-
-	/* workers are chained either in the idle_list or busy_hash */
-	struct list_head	idle_list;	/* X: list of idle workers */
+	/* workers are chained either in busy_hash or pool idle_list */
 	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
 						/* L: hash of busy workers */
 
-	struct timer_list	idle_timer;	/* L: worker idle timeout */
-	struct timer_list	mayday_timer;	/* L: SOS timer for dworkers */
-
-	struct ida		worker_ida;	/* L: for worker IDs */
+	struct worker_pool	pool;		/* the worker pools */
 
 	struct task_struct	*trustee;	/* L: for gcwq shutdown */
 	unsigned int		trustee_state;	/* L: trustee state */
 	wait_queue_head_t	trustee_wait;	/* trustee wait */
-	struct worker		*first_idle;	/* L: first idle worker */
 } ____cacheline_aligned_in_smp;
 
 /*
@@ -175,7 +182,7 @@ struct global_cwq {
  * aligned at two's power of the number of flag bits.
  */
 struct cpu_workqueue_struct {
-	struct global_cwq	*gcwq;		/* I: the associated gcwq */
+	struct worker_pool	*pool;		/* I: the associated pool */
 	struct workqueue_struct *wq;		/* I: the owning workqueue */
 	int			work_color;	/* L: current color */
 	int			flush_color;	/* L: flushing color */
@@ -555,7 +562,7 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
 
 	if (data & WORK_STRUCT_CWQ)
 		return ((struct cpu_workqueue_struct *)
-			(data & WORK_STRUCT_WQ_DATA_MASK))->gcwq;
+			(data & WORK_STRUCT_WQ_DATA_MASK))->pool->gcwq;
 
 	cpu = data >> WORK_STRUCT_FLAG_BITS;
 	if (cpu == WORK_CPU_NONE)
@@ -587,13 +594,13 @@ static bool __need_more_worker(struct global_cwq *gcwq)
  */
 static bool need_more_worker(struct global_cwq *gcwq)
 {
-	return !list_empty(&gcwq->worklist) && __need_more_worker(gcwq);
+	return !list_empty(&gcwq->pool.worklist) && __need_more_worker(gcwq);
 }
 
 /* Can I start working?  Called from busy but !running workers. */
 static bool may_start_working(struct global_cwq *gcwq)
 {
-	return gcwq->nr_idle;
+	return gcwq->pool.nr_idle;
 }
 
 /* Do I need to keep working?  Called from currently running workers. */
@@ -601,7 +608,7 @@ static bool keep_working(struct global_cwq *gcwq)
 {
 	atomic_t *nr_running = get_gcwq_nr_running(gcwq->cpu);
 
-	return !list_empty(&gcwq->worklist) &&
+	return !list_empty(&gcwq->pool.worklist) &&
 		(atomic_read(nr_running) <= 1 ||
 		 gcwq->flags & GCWQ_HIGHPRI_PENDING);
 }
@@ -622,8 +629,8 @@ static bool need_to_manage_workers(struct global_cwq *gcwq)
 static bool too_many_workers(struct global_cwq *gcwq)
 {
 	bool managing = gcwq->flags & GCWQ_MANAGING_WORKERS;
-	int nr_idle = gcwq->nr_idle + managing; /* manager is considered idle */
-	int nr_busy = gcwq->nr_workers - nr_idle;
+	int nr_idle = gcwq->pool.nr_idle + managing; /* manager is considered idle */
+	int nr_busy = gcwq->pool.nr_workers - nr_idle;
 
 	return nr_idle > 2 && (nr_idle - 2) * MAX_IDLE_WORKERS_RATIO >= nr_busy;
 }
@@ -635,10 +642,10 @@ static bool too_many_workers(struct global_cwq *gcwq)
 /* Return the first worker.  Safe with preemption disabled */
 static struct worker *first_worker(struct global_cwq *gcwq)
 {
-	if (unlikely(list_empty(&gcwq->idle_list)))
+	if (unlikely(list_empty(&gcwq->pool.idle_list)))
 		return NULL;
 
-	return list_first_entry(&gcwq->idle_list, struct worker, entry);
+	return list_first_entry(&gcwq->pool.idle_list, struct worker, entry);
 }
 
 /**
@@ -696,7 +703,8 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 				       unsigned int cpu)
 {
 	struct worker *worker = kthread_data(task), *to_wakeup = NULL;
-	struct global_cwq *gcwq = get_gcwq(cpu);
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	atomic_t *nr_running = get_gcwq_nr_running(cpu);
 
 	if (worker->flags & WORKER_NOT_RUNNING)
@@ -716,7 +724,7 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 	 * could be manipulating idle_list, so dereferencing idle_list
 	 * without gcwq lock is safe.
 	 */
-	if (atomic_dec_and_test(nr_running) && !list_empty(&gcwq->worklist))
+	if (atomic_dec_and_test(nr_running) && !list_empty(&pool->worklist))
 		to_wakeup = first_worker(gcwq);
 	return to_wakeup ? to_wakeup->task : NULL;
 }
@@ -737,7 +745,8 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task,
 static inline void worker_set_flags(struct worker *worker, unsigned int flags,
 				    bool wakeup)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	WARN_ON_ONCE(worker->task != current);
 
@@ -752,7 +761,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
 
 		if (wakeup) {
 			if (atomic_dec_and_test(nr_running) &&
-			    !list_empty(&gcwq->worklist))
+			    !list_empty(&pool->worklist))
 				wake_up_worker(gcwq);
 		} else
 			atomic_dec(nr_running);
@@ -773,7 +782,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags,
  */
 static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	unsigned int oflags = worker->flags;
 
 	WARN_ON_ONCE(worker->task != current);
@@ -894,9 +903,9 @@ static inline struct list_head *gcwq_determine_ins_pos(struct global_cwq *gcwq,
 	struct work_struct *twork;
 
 	if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
-		return &gcwq->worklist;
+		return &gcwq->pool.worklist;
 
-	list_for_each_entry(twork, &gcwq->worklist, entry) {
+	list_for_each_entry(twork, &gcwq->pool.worklist, entry) {
 		struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
 
 		if (!(tcwq->wq->flags & WQ_HIGHPRI))
@@ -924,7 +933,7 @@ static void insert_work(struct cpu_workqueue_struct *cwq,
 			struct work_struct *work, struct list_head *head,
 			unsigned int extra_flags)
 {
-	struct global_cwq *gcwq = cwq->gcwq;
+	struct global_cwq *gcwq = cwq->pool->gcwq;
 
 	/* we own @work, set data and link */
 	set_work_cwq(work, cwq, extra_flags);
@@ -1196,7 +1205,8 @@ EXPORT_SYMBOL_GPL(queue_delayed_work_on);
  */
 static void worker_enter_idle(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	BUG_ON(worker->flags & WORKER_IDLE);
 	BUG_ON(!list_empty(&worker->entry) &&
@@ -1204,15 +1214,15 @@ static void worker_enter_idle(struct worker *worker)
 
 	/* can't use worker_set_flags(), also called from start_worker() */
 	worker->flags |= WORKER_IDLE;
-	gcwq->nr_idle++;
+	pool->nr_idle++;
 	worker->last_active = jiffies;
 
 	/* idle_list is LIFO */
-	list_add(&worker->entry, &gcwq->idle_list);
+	list_add(&worker->entry, &pool->idle_list);
 
 	if (likely(!(worker->flags & WORKER_ROGUE))) {
-		if (too_many_workers(gcwq) && !timer_pending(&gcwq->idle_timer))
-			mod_timer(&gcwq->idle_timer,
+		if (too_many_workers(gcwq) && !timer_pending(&pool->idle_timer))
+			mod_timer(&pool->idle_timer,
 				  jiffies + IDLE_WORKER_TIMEOUT);
 	} else
 		wake_up_all(&gcwq->trustee_wait);
@@ -1223,7 +1233,7 @@ static void worker_enter_idle(struct worker *worker)
 	 * warning may trigger spuriously.  Check iff trustee is idle.
 	 */
 	WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
-		     gcwq->nr_workers == gcwq->nr_idle &&
+		     pool->nr_workers == pool->nr_idle &&
 		     atomic_read(get_gcwq_nr_running(gcwq->cpu)));
 }
 
@@ -1238,11 +1248,11 @@ static void worker_enter_idle(struct worker *worker)
  */
 static void worker_leave_idle(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
 
 	BUG_ON(!(worker->flags & WORKER_IDLE));
 	worker_clr_flags(worker, WORKER_IDLE);
-	gcwq->nr_idle--;
+	pool->nr_idle--;
 	list_del_init(&worker->entry);
 }
 
@@ -1279,7 +1289,7 @@ static void worker_leave_idle(struct worker *worker)
 static bool worker_maybe_bind_and_lock(struct worker *worker)
 __acquires(&gcwq->lock)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	struct task_struct *task = worker->task;
 
 	while (true) {
@@ -1321,7 +1331,7 @@ __acquires(&gcwq->lock)
 static void worker_rebind_fn(struct work_struct *work)
 {
 	struct worker *worker = container_of(work, struct worker, rebind_work);
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 
 	if (worker_maybe_bind_and_lock(worker))
 		worker_clr_flags(worker, WORKER_REBIND);
@@ -1362,13 +1372,14 @@ static struct worker *alloc_worker(void)
 static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 {
 	bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+	struct worker_pool *pool = &gcwq->pool;
 	struct worker *worker = NULL;
 	int id = -1;
 
 	spin_lock_irq(&gcwq->lock);
-	while (ida_get_new(&gcwq->worker_ida, &id)) {
+	while (ida_get_new(&pool->worker_ida, &id)) {
 		spin_unlock_irq(&gcwq->lock);
-		if (!ida_pre_get(&gcwq->worker_ida, GFP_KERNEL))
+		if (!ida_pre_get(&pool->worker_ida, GFP_KERNEL))
 			goto fail;
 		spin_lock_irq(&gcwq->lock);
 	}
@@ -1378,7 +1389,7 @@ static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 	if (!worker)
 		goto fail;
 
-	worker->gcwq = gcwq;
+	worker->pool = pool;
 	worker->id = id;
 
 	if (!on_unbound_cpu)
@@ -1409,7 +1420,7 @@ static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
 fail:
 	if (id >= 0) {
 		spin_lock_irq(&gcwq->lock);
-		ida_remove(&gcwq->worker_ida, id);
+		ida_remove(&pool->worker_ida, id);
 		spin_unlock_irq(&gcwq->lock);
 	}
 	kfree(worker);
@@ -1428,7 +1439,7 @@ fail:
 static void start_worker(struct worker *worker)
 {
 	worker->flags |= WORKER_STARTED;
-	worker->gcwq->nr_workers++;
+	worker->pool->nr_workers++;
 	worker_enter_idle(worker);
 	wake_up_process(worker->task);
 }
@@ -1444,7 +1455,8 @@ static void start_worker(struct worker *worker)
  */
 static void destroy_worker(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	int id = worker->id;
 
 	/* sanity check frenzy */
@@ -1452,9 +1464,9 @@ static void destroy_worker(struct worker *worker)
 	BUG_ON(!list_empty(&worker->scheduled));
 
 	if (worker->flags & WORKER_STARTED)
-		gcwq->nr_workers--;
+		pool->nr_workers--;
 	if (worker->flags & WORKER_IDLE)
-		gcwq->nr_idle--;
+		pool->nr_idle--;
 
 	list_del_init(&worker->entry);
 	worker->flags |= WORKER_DIE;
@@ -1465,7 +1477,7 @@ static void destroy_worker(struct worker *worker)
 	kfree(worker);
 
 	spin_lock_irq(&gcwq->lock);
-	ida_remove(&gcwq->worker_ida, id);
+	ida_remove(&pool->worker_ida, id);
 }
 
 static void idle_worker_timeout(unsigned long __gcwq)
@@ -1479,11 +1491,12 @@ static void idle_worker_timeout(unsigned long __gcwq)
 		unsigned long expires;
 
 		/* idle_list is kept in LIFO order, check the last one */
-		worker = list_entry(gcwq->idle_list.prev, struct worker, entry);
+		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
+				    entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires))
-			mod_timer(&gcwq->idle_timer, expires);
+			mod_timer(&gcwq->pool.idle_timer, expires);
 		else {
 			/* it's been idle for too long, wake up manager */
 			gcwq->flags |= GCWQ_MANAGE_WORKERS;
@@ -1504,7 +1517,7 @@ static bool send_mayday(struct work_struct *work)
 		return false;
 
 	/* mayday mayday mayday */
-	cpu = cwq->gcwq->cpu;
+	cpu = cwq->pool->gcwq->cpu;
 	/* WORK_CPU_UNBOUND can't be set in cpumask, use cpu 0 instead */
 	if (cpu == WORK_CPU_UNBOUND)
 		cpu = 0;
@@ -1527,13 +1540,13 @@ static void gcwq_mayday_timeout(unsigned long __gcwq)
 		 * allocation deadlock.  Send distress signals to
 		 * rescuers.
 		 */
-		list_for_each_entry(work, &gcwq->worklist, entry)
+		list_for_each_entry(work, &gcwq->pool.worklist, entry)
 			send_mayday(work);
 	}
 
 	spin_unlock_irq(&gcwq->lock);
 
-	mod_timer(&gcwq->mayday_timer, jiffies + MAYDAY_INTERVAL);
+	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INTERVAL);
 }
 
 /**
@@ -1568,14 +1581,14 @@ restart:
 	spin_unlock_irq(&gcwq->lock);
 
 	/* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
-	mod_timer(&gcwq->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
+	mod_timer(&gcwq->pool.mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
 
 	while (true) {
 		struct worker *worker;
 
 		worker = create_worker(gcwq, true);
 		if (worker) {
-			del_timer_sync(&gcwq->mayday_timer);
+			del_timer_sync(&gcwq->pool.mayday_timer);
 			spin_lock_irq(&gcwq->lock);
 			start_worker(worker);
 			BUG_ON(need_to_create_worker(gcwq));
@@ -1592,7 +1605,7 @@ restart:
 			break;
 	}
 
-	del_timer_sync(&gcwq->mayday_timer);
+	del_timer_sync(&gcwq->pool.mayday_timer);
 	spin_lock_irq(&gcwq->lock);
 	if (need_to_create_worker(gcwq))
 		goto restart;
@@ -1622,11 +1635,12 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
 		struct worker *worker;
 		unsigned long expires;
 
-		worker = list_entry(gcwq->idle_list.prev, struct worker, entry);
+		worker = list_entry(gcwq->pool.idle_list.prev, struct worker,
+				    entry);
 		expires = worker->last_active + IDLE_WORKER_TIMEOUT;
 
 		if (time_before(jiffies, expires)) {
-			mod_timer(&gcwq->idle_timer, expires);
+			mod_timer(&gcwq->pool.idle_timer, expires);
 			break;
 		}
 
@@ -1659,7 +1673,7 @@ static bool maybe_destroy_workers(struct global_cwq *gcwq)
  */
 static bool manage_workers(struct worker *worker)
 {
-	struct global_cwq *gcwq = worker->gcwq;
+	struct global_cwq *gcwq = worker->pool->gcwq;
 	bool ret = false;
 
 	if (gcwq->flags & GCWQ_MANAGING_WORKERS)
@@ -1732,7 +1746,7 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *work = list_first_entry(&cwq->delayed_works,
 						    struct work_struct, entry);
-	struct list_head *pos = gcwq_determine_ins_pos(cwq->gcwq, cwq);
+	struct list_head *pos = gcwq_determine_ins_pos(cwq->pool->gcwq, cwq);
 
 	trace_workqueue_activate_work(work);
 	move_linked_works(work, pos, NULL);
@@ -1808,7 +1822,8 @@ __releases(&gcwq->lock)
 __acquires(&gcwq->lock)
 {
 	struct cpu_workqueue_struct *cwq = get_work_cwq(work);
-	struct global_cwq *gcwq = cwq->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 	struct hlist_head *bwh = busy_worker_head(gcwq, work);
 	bool cpu_intensive = cwq->wq->flags & WQ_CPU_INTENSIVE;
 	work_func_t f = work->func;
@@ -1854,10 +1869,10 @@ __acquires(&gcwq->lock)
 	 * wake up another worker; otherwise, clear HIGHPRI_PENDING.
 	 */
 	if (unlikely(gcwq->flags & GCWQ_HIGHPRI_PENDING)) {
-		struct work_struct *nwork = list_first_entry(&gcwq->worklist,
-						struct work_struct, entry);
+		struct work_struct *nwork = list_first_entry(&pool->worklist,
+					 struct work_struct, entry);
 
-		if (!list_empty(&gcwq->worklist) &&
+		if (!list_empty(&pool->worklist) &&
 		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
 			wake_up_worker(gcwq);
 		else
@@ -1950,7 +1965,8 @@ static void process_scheduled_works(struct worker *worker)
 static int worker_thread(void *__worker)
 {
 	struct worker *worker = __worker;
-	struct global_cwq *gcwq = worker->gcwq;
+	struct worker_pool *pool = worker->pool;
+	struct global_cwq *gcwq = pool->gcwq;
 
 	/* tell the scheduler that this is a workqueue worker */
 	worker->task->flags |= PF_WQ_WORKER;
@@ -1990,7 +2006,7 @@ recheck:
 
 	do {
 		struct work_struct *work =
-			list_first_entry(&gcwq->worklist,
+			list_first_entry(&pool->worklist,
 					 struct work_struct, entry);
 
 		if (likely(!(*work_data_bits(work) & WORK_STRUCT_LINKED))) {
@@ -2064,14 +2080,15 @@ repeat:
 	for_each_mayday_cpu(cpu, wq->mayday_mask) {
 		unsigned int tcpu = is_unbound ? WORK_CPU_UNBOUND : cpu;
 		struct cpu_workqueue_struct *cwq = get_cwq(tcpu, wq);
-		struct global_cwq *gcwq = cwq->gcwq;
+		struct worker_pool *pool = cwq->pool;
+		struct global_cwq *gcwq = pool->gcwq;
 		struct work_struct *work, *n;
 
 		__set_current_state(TASK_RUNNING);
 		mayday_clear_cpu(cpu, wq->mayday_mask);
 
 		/* migrate to the target cpu if possible */
-		rescuer->gcwq = gcwq;
+		rescuer->pool = pool;
 		worker_maybe_bind_and_lock(rescuer);
 
 		/*
@@ -2079,7 +2096,7 @@ repeat:
 		 * process'em.
 		 */
 		BUG_ON(!list_empty(&rescuer->scheduled));
-		list_for_each_entry_safe(work, n, &gcwq->worklist, entry)
+		list_for_each_entry_safe(work, n, &pool->worklist, entry)
 			if (get_work_cwq(work) == cwq)
 				move_linked_works(work, scheduled, &n);
 
@@ -2216,7 +2233,7 @@ static bool flush_workqueue_prep_cwqs(struct workqueue_struct *wq,
 
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
-		struct global_cwq *gcwq = cwq->gcwq;
+		struct global_cwq *gcwq = cwq->pool->gcwq;
 
 		spin_lock_irq(&gcwq->lock);
 
@@ -2432,9 +2449,9 @@ reflush:
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
 		bool drained;
 
-		spin_lock_irq(&cwq->gcwq->lock);
+		spin_lock_irq(&cwq->pool->gcwq->lock);
 		drained = !cwq->nr_active && list_empty(&cwq->delayed_works);
-		spin_unlock_irq(&cwq->gcwq->lock);
+		spin_unlock_irq(&cwq->pool->gcwq->lock);
 
 		if (drained)
 			continue;
@@ -2474,7 +2491,7 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
 		 */
 		smp_rmb();
 		cwq = get_work_cwq(work);
-		if (unlikely(!cwq || gcwq != cwq->gcwq))
+		if (unlikely(!cwq || gcwq != cwq->pool->gcwq))
 			goto already_gone;
 	} else if (wait_executing) {
 		worker = find_worker_executing_work(gcwq, work);
@@ -3017,7 +3034,7 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 		struct global_cwq *gcwq = get_gcwq(cpu);
 
 		BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
-		cwq->gcwq = gcwq;
+		cwq->pool = &gcwq->pool;
 		cwq->wq = wq;
 		cwq->flush_color = -1;
 		cwq->max_active = max_active;
@@ -3344,7 +3361,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 
 	gcwq->flags |= GCWQ_MANAGING_WORKERS;
 
-	list_for_each_entry(worker, &gcwq->idle_list, entry)
+	list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
 		worker->flags |= WORKER_ROGUE;
 
 	for_each_busy_worker(worker, i, pos, gcwq)
@@ -3369,7 +3386,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	atomic_set(get_gcwq_nr_running(gcwq->cpu), 0);
 
 	spin_unlock_irq(&gcwq->lock);
-	del_timer_sync(&gcwq->idle_timer);
+	del_timer_sync(&gcwq->pool.idle_timer);
 	spin_lock_irq(&gcwq->lock);
 
 	/*
@@ -3391,17 +3408,17 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * may be frozen works in freezable cwqs.  Don't declare
 	 * completion while frozen.
 	 */
-	while (gcwq->nr_workers != gcwq->nr_idle ||
+	while (gcwq->pool.nr_workers != gcwq->pool.nr_idle ||
 	       gcwq->flags & GCWQ_FREEZING ||
 	       gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
 		int nr_works = 0;
 
-		list_for_each_entry(work, &gcwq->worklist, entry) {
+		list_for_each_entry(work, &gcwq->pool.worklist, entry) {
 			send_mayday(work);
 			nr_works++;
 		}
 
-		list_for_each_entry(worker, &gcwq->idle_list, entry) {
+		list_for_each_entry(worker, &gcwq->pool.idle_list, entry) {
 			if (!nr_works--)
 				break;
 			wake_up_process(worker->task);
@@ -3428,11 +3445,11 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * all workers till we're canceled.
 	 */
 	do {
-		rc = trustee_wait_event(!list_empty(&gcwq->idle_list));
-		while (!list_empty(&gcwq->idle_list))
-			destroy_worker(list_first_entry(&gcwq->idle_list,
+		rc = trustee_wait_event(!list_empty(&gcwq->pool.idle_list));
+		while (!list_empty(&gcwq->pool.idle_list))
+			destroy_worker(list_first_entry(&gcwq->pool.idle_list,
 							struct worker, entry));
-	} while (gcwq->nr_workers && rc >= 0);
+	} while (gcwq->pool.nr_workers && rc >= 0);
 
 	/*
 	 * At this point, either draining has completed and no worker
@@ -3441,7 +3458,7 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * Tell the remaining busy ones to rebind once it finishes the
 	 * currently scheduled works by scheduling the rebind_work.
 	 */
-	WARN_ON(!list_empty(&gcwq->idle_list));
+	WARN_ON(!list_empty(&gcwq->pool.idle_list));
 
 	for_each_busy_worker(worker, i, pos, gcwq) {
 		struct work_struct *rebind_work = &worker->rebind_work;
@@ -3522,7 +3539,7 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		kthread_bind(new_trustee, cpu);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->first_idle);
+		BUG_ON(gcwq->pool.first_idle);
 		new_worker = create_worker(gcwq, false);
 		if (!new_worker) {
 			if (new_trustee)
@@ -3544,8 +3561,8 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->first_idle);
-		gcwq->first_idle = new_worker;
+		BUG_ON(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = new_worker;
 		break;
 
 	case CPU_DYING:
@@ -3562,8 +3579,8 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		gcwq->trustee_state = TRUSTEE_BUTCHER;
 		/* fall through */
 	case CPU_UP_CANCELED:
-		destroy_worker(gcwq->first_idle);
-		gcwq->first_idle = NULL;
+		destroy_worker(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = NULL;
 		break;
 
 	case CPU_DOWN_FAILED:
@@ -3581,11 +3598,11 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		 * take a look.
 		 */
 		spin_unlock_irq(&gcwq->lock);
-		kthread_bind(gcwq->first_idle->task, cpu);
+		kthread_bind(gcwq->pool.first_idle->task, cpu);
 		spin_lock_irq(&gcwq->lock);
 		gcwq->flags |= GCWQ_MANAGE_WORKERS;
-		start_worker(gcwq->first_idle);
-		gcwq->first_idle = NULL;
+		start_worker(gcwq->pool.first_idle);
+		gcwq->pool.first_idle = NULL;
 		break;
 	}
 
@@ -3794,22 +3811,23 @@ static int __init init_workqueues(void)
 		struct global_cwq *gcwq = get_gcwq(cpu);
 
 		spin_lock_init(&gcwq->lock);
-		INIT_LIST_HEAD(&gcwq->worklist);
+		gcwq->pool.gcwq = gcwq;
+		INIT_LIST_HEAD(&gcwq->pool.worklist);
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		INIT_LIST_HEAD(&gcwq->idle_list);
+		INIT_LIST_HEAD(&gcwq->pool.idle_list);
 		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
 			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
 
-		init_timer_deferrable(&gcwq->idle_timer);
-		gcwq->idle_timer.function = idle_worker_timeout;
-		gcwq->idle_timer.data = (unsigned long)gcwq;
+		init_timer_deferrable(&gcwq->pool.idle_timer);
+		gcwq->pool.idle_timer.function = idle_worker_timeout;
+		gcwq->pool.idle_timer.data = (unsigned long)gcwq;
 
-		setup_timer(&gcwq->mayday_timer, gcwq_mayday_timeout,
+		setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
 			    (unsigned long)gcwq);
 
-		ida_init(&gcwq->worker_ida);
+		ida_init(&gcwq->pool.worker_ida);
 
 		gcwq->trustee_state = TRUSTEE_DONE;
 		init_waitqueue_head(&gcwq->trustee_wait);
-- 
1.7.7.3
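[Editorial note: the hunks above mechanically relocate per-pool state (worklist, idle_list, timers, worker_ida, first_idle, worker counts) from `struct global_cwq` into an embedded `struct worker_pool` with a back-pointer to its gcwq. Below is a minimal, self-contained C sketch of that data-structure shape — not the kernel code; kernel types such as `spinlock_t`, the deferrable timers, and the ida are stubbed or omitted, and field names are taken only from what the diff shows:

```c
#include <assert.h>

/* Stub of the kernel's doubly-linked list head. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->next = h->prev = h;
}

struct global_cwq;

/* Per-pool state that previously lived directly in global_cwq.
 * The real structure also carries idle/mayday timers and a worker ida;
 * those are omitted here to keep the sketch self-contained. */
struct worker_pool {
	struct global_cwq *gcwq;	/* back-pointer, set at init */
	struct list_head worklist;	/* pending work items */
	struct list_head idle_list;	/* idle workers */
	int nr_workers;
	int nr_idle;
};

/* After the conversion, global_cwq embeds the pool; the rest of the
 * patchset grows this into two pools (normal and highpri). */
struct global_cwq {
	unsigned int cpu;
	struct worker_pool pool;
};

/* Mirrors the init_workqueues() hunk: wire the back-pointer and
 * initialize the now pool-local lists. */
static void init_gcwq(struct global_cwq *gcwq, unsigned int cpu)
{
	gcwq->cpu = cpu;
	gcwq->pool.gcwq = gcwq;
	INIT_LIST_HEAD(&gcwq->pool.worklist);
	INIT_LIST_HEAD(&gcwq->pool.idle_list);
	gcwq->pool.nr_workers = 0;
	gcwq->pool.nr_idle = 0;
}
```

Every access in the diff then becomes `gcwq->pool.<field>` instead of `gcwq-><field>`, which is exactly the one-line substitution each hunk performs.]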

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
  2012-07-12 21:45           ` Tejun Heo
  (?)
@ 2012-07-12 22:16             ` Tony Luck
  -1 siblings, 0 replies; 96+ messages in thread
From: Tony Luck @ 2012-07-12 22:16 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Fengguang Wu, linux-kernel, torvalds, joshhunt00, axboe, rni,
	vgoyal, vwadekar, herbert, davem, linux-crypto, swhiteho, bpm,
	elder, xfs, marcel, gustavo, johan.hedberg, linux-bluetooth,
	martin.petersen

[-- Attachment #1: Type: text/plain, Size: 430 bytes --]

On Thu, Jul 12, 2012 at 2:45 PM, Tejun Heo <tj@kernel.org> wrote:
> I was wrong and am now dazed and confused.  That's from
> init_workqueues() where only cpu0 is running.  How the hell did
> nr_running manage to become non-zero at that point?  Can you please
> apply the following patch and report the boot log?  Thank you.

Patch applied on top of next-20120712 (which still has the same problem).

dmesg output attached

-Tony

[-- Attachment #2: dmesg.txt --]
[-- Type: text/plain, Size: 23573 bytes --]

Linux version 3.5.0-rc6-zx1-smp-next-20120712-1-g1275170 (aegl@linux-bxb1) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP Thu Jul 12 15:09:17 PDT 2012
EFI v1.10 by HP: SALsystab=0x3fefa000 ACPI 2.0=0x3fd5e000 SMBIOS=0x3fefc000 HCDP=0x3fd5c000
Early serial console at MMIO 0xff5e0000 (options '9600')
bootconsole [uart0] enabled
PCDP: v0 at 0x3fd5c000
Explicit "console="; ignoring PCDP
ACPI: RSDP 000000003fd5e000 00028 (v02     HP)
ACPI: XSDT 000000003fd5e02c 00094 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: FACP 000000003fd67390 000F4 (v03     HP   rx2620 00000000   HP 00000000)
ACPI Warning: 32/64X length mismatch in Gpe0Block: 32/16 (20120518/tbfadt-565)
ACPI Warning: 32/64X length mismatch in Gpe1Block: 32/16 (20120518/tbfadt-565)
ACPI: DSDT 000000003fd5e100 05F3C (v01     HP   rx2620 00000007 INTL 02012044)
ACPI: FACS 000000003fd67488 00040
ACPI: SPCR 000000003fd674c8 00050 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: DBGP 000000003fd67518 00034 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: APIC 000000003fd67610 000B0 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: SPMI 000000003fd67550 00050 (v04     HP   rx2620 00000000   HP 00000000)
ACPI: CPEP 000000003fd675a0 00034 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: SSDT 000000003fd64040 001D6 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd64220 00702 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd64930 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd65350 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd65d70 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd66790 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd671b0 000EB (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd672a0 000EF (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: Local APIC address c0000000fee00000
2 CPUs available, 2 CPUs total
warning: skipping physical page 0
Initial ramdisk at: 0xe00000407e9bb000 (6071698 bytes)
SAL 3.1: HP version 3.15
SAL Platform features: None
SAL: AP wakeup using external interrupt vector 0xff
MCA related initialization done
warning: skipping physical page 0
Zone ranges:
  DMA      [mem 0x00004000-0xffffffff]
  Normal   [mem 0x100000000-0x407ffc7fff]
Movable zone start for each node
Early memory node ranges
  node   0: [mem 0x00004000-0x3f4ebfff]
  node   0: [mem 0x3fc00000-0x3fd5bfff]
  node   0: [mem 0x4040000000-0x407fd2bfff]
  node   0: [mem 0x407fd98000-0x407fe07fff]
  node   0: [mem 0x407fe80000-0x407ffc7fff]
On node 0 totalpages: 130378
free_area_init_node: node 0, pgdat a0000001012ee380, node_mem_map a0007fffc7900038
  DMA zone: 896 pages used for memmap
  DMA zone: 0 pages reserved
  DMA zone: 64017 pages, LIFO batch:7
  Normal zone: 56896 pages used for memmap
  Normal zone: 8569 pages, LIFO batch:1
Virtual mem_map starts at 0xa0007fffc7900000
pcpu-alloc: s11392 r8192 d242560 u262144 alloc=16*16384
pcpu-alloc: [0] 0 [0] 1 
Built 1 zonelists in Zone order, mobility grouping off.  Total pages: 72586
Kernel command line: BOOT_IMAGE=scsi0:\efi\SuSE\l-zx1-smp.gz root=/dev/disk/by-id/scsi-200000e1100a5d5f2-part2  console=uart,mmio,0xff5e0000 
PID hash table entries: 4096 (order: 1, 32768 bytes)
Dentry cache hash table entries: 262144 (order: 7, 2097152 bytes)
Inode-cache hash table entries: 131072 (order: 6, 1048576 bytes)
Memory: 2048176k/2086064k available (13820k code, 37888k reserved, 5920k data, 848k init)
SLUB: Genslabs=17, HWalign=128, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Hierarchical RCU implementation.
	RCU restricting CPUs from NR_CPUS=16 to nr_cpu_ids=2.
NR_IRQS:768
ACPI: Local APIC address c0000000fee00000
GSI 36 (level, low) -> CPU 0 (0x0000) vector 48
CPU 0: base freq=199.999MHz, ITC ratio=13/2, ITC freq=1299.994MHz+/-650ppm
Console: colour dummy device 80x25
Calibrating delay loop... 1945.60 BogoMIPS (lpj=3891200)
pid_max: default: 32768 minimum: 301
Mount-cache hash table entries: 1024
ACPI: Core revision 20120518
Boot processor id 0x0/0x0
XXX cpu=0 gcwq=e000004040000d80 base=e000004040002000
XXX cpu=0 nr_running=0 @ e000004040002000
XXX cpu=0 nr_running=0 @ e000004040002008
XXX cpu=1 gcwq=e000004040040d80 base=e000004040042000
XXX cpu=1 nr_running=0 @ e000004040042000
XXX cpu=1 nr_running=0 @ e000004040042008
XXX cpu=16 nr_running=0 @ a000000101347668
XXX cpu=16 nr_running=0 @ a000000101347670
------------[ cut here ]------------
WARNING: at kernel/workqueue.c:1220 worker_enter_idle+0x2d0/0x4a0()
Modules linked in:

Call Trace:
 [<a0000001000154e0>] show_stack+0x80/0xa0
                                sp=e0000040600f7c30 bsp=e0000040600f0da8
 [<a000000100d6c4c0>] dump_stack+0x30/0x50
                                sp=e0000040600f7e00 bsp=e0000040600f0d90
 [<a0000001000730a0>] warn_slowpath_common+0xc0/0x100
                                sp=e0000040600f7e00 bsp=e0000040600f0d50
 [<a000000100073120>] warn_slowpath_null+0x40/0x60
                                sp=e0000040600f7e00 bsp=e0000040600f0d28
 [<a0000001000aaab0>] worker_enter_idle+0x2d0/0x4a0
                                sp=e0000040600f7e00 bsp=e0000040600f0cf0
 [<a0000001000ad000>] worker_thread+0x4a0/0xbe0
                                sp=e0000040600f7e00 bsp=e0000040600f0c28
 [<a0000001000bdc10>] kthread+0x110/0x140
                                sp=e0000040600f7e00 bsp=e0000040600f0be8
 [<a000000100013510>] kernel_thread_helper+0x30/0x60
                                sp=e0000040600f7e30 bsp=e0000040600f0bc0
 [<a00000010000a0c0>] start_kernel_thread+0x20/0x40
                                sp=e0000040600f7e30 bsp=e0000040600f0bc0
---[ end trace e9840e0cb994cb82 ]---
Fixed BSP b0 value from CPU 1
CPU 1: synchronized ITC with CPU 0 (last diff -12 cycles, maxerr 585 cycles)
CPU 1: base freq=199.999MHz, ITC ratio=13/2, ITC freq=1299.994MHz+/-650ppm
Brought up 2 CPUs
Total of 2 processors activated (3891.20 BogoMIPS).
DMI 2.3 present.
DMI: hp server rx2620                   , BIOS 03.17                                                            03/31/2005
NET: Registered protocol family 16
ACPI: bus type pci registered
bio: create slab <bio-0> at 0
ACPI: Added _OSI(Module Device)
ACPI: Added _OSI(Processor Device)
ACPI: Added _OSI(3.0 _SCP Extensions)
ACPI: Added _OSI(Processor Aggregator Device)
ACPI: EC: Look up EC in DSDT
ACPI: Interpreter enabled
ACPI: (supports S0 S5)
ACPI: Using IOSAPIC for interrupt routing
ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-1f])
pci_root HWP0002:00: host bridge window [io  0x0000-0x1fff]
pci_root HWP0002:00: host bridge window [mem 0x80000000-0x8fffffff]
pci_root HWP0002:00: host bridge window [mem 0x80004000000-0x80103fffffe]
PCI host bridge to bus 0000:00
pci_bus 0000:00: busn_res: [bus 00-1f] is inserted under domain [bus 00-ff]
pci_bus 0000:00: root bus resource [bus 00-1f]
pci_bus 0000:00: root bus resource [io  0x0000-0x1fff]
pci_bus 0000:00: root bus resource [mem 0x80000000-0x8fffffff]
pci_bus 0000:00: root bus resource [mem 0x80004000000-0x80103fffffe]
pci 0000:00:01.0: [1033:0035] type 00 class 0x0c0310
pci 0000:00:01.0: reg 10: [mem 0x80002000-0x80002fff]
pci 0000:00:01.0: supports D1 D2
pci 0000:00:01.0: PME# supported from D0 D1 D2 D3hot
pci 0000:00:01.1: [1033:0035] type 00 class 0x0c0310
pci 0000:00:01.1: reg 10: [mem 0x80001000-0x80001fff]
pci 0000:00:01.1: supports D1 D2
pci 0000:00:01.1: PME# supported from D0 D1 D2 D3hot
pci 0000:00:01.2: [1033:00e0] type 00 class 0x0c0320
pci 0000:00:01.2: reg 10: [mem 0x80000000-0x800000ff]
pci 0000:00:01.2: supports D1 D2
pci 0000:00:01.2: PME# supported from D0 D1 D2 D3hot
pci 0000:00:02.0: [1095:0649] type 00 class 0x01018f
pci 0000:00:02.0: reg 10: [io  0x0d18-0x0d1f]
pci 0000:00:02.0: reg 14: [io  0x0d24-0x0d27]
pci 0000:00:02.0: reg 18: [io  0x0d10-0x0d17]
pci 0000:00:02.0: reg 1c: [io  0x0d20-0x0d23]
pci 0000:00:02.0: reg 20: [io  0x0d00-0x0d0f]
pci 0000:00:02.0: supports D1 D2
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI0._PRT]
 pci0000:00: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI1] (domain 0000 [bus 20-3f])
pci_root HWP0002:01: host bridge window [mem 0xff5e0000-0xff5e0007]
pci_root HWP0002:01: host bridge window [mem 0xff5e2000-0xff5e2007]
pci_root HWP0002:01: host bridge window [io  0x2000-0x2fff]
pci_root HWP0002:01: host bridge window [mem 0x90000000-0x97ffffff]
pci_root HWP0002:01: host bridge window [mem 0x90004000000-0x90103fffffe]
PCI host bridge to bus 0000:20
pci_bus 0000:20: busn_res: [bus 20-3f] is inserted under domain [bus 00-ff]
pci_bus 0000:20: root bus resource [bus 20-3f]
pci_bus 0000:20: root bus resource [io  0x2000-0x2fff]
pci_bus 0000:20: root bus resource [mem 0x90000000-0x97ffffff]
pci_bus 0000:20: root bus resource [mem 0x90004000000-0x90103fffffe]
pci 0000:20:01.0: [1000:0030] type 00 class 0x010000
pci 0000:20:01.0: reg 10: [io  0x2100-0x21ff]
pci 0000:20:01.0: reg 14: [mem 0x903a0000-0x903bffff 64bit]
pci 0000:20:01.0: reg 1c: [mem 0x90380000-0x9039ffff 64bit]
pci 0000:20:01.0: reg 30: [mem 0x90100000-0x901fffff pref]
pci 0000:20:01.0: supports D1 D2
pci 0000:20:01.1: [1000:0030] type 00 class 0x010000
pci 0000:20:01.1: reg 10: [io  0x2000-0x20ff]
pci 0000:20:01.1: reg 14: [mem 0x90360000-0x9037ffff 64bit]
pci 0000:20:01.1: reg 1c: [mem 0x90340000-0x9035ffff 64bit]
pci 0000:20:01.1: reg 30: [mem 0x90000000-0x900fffff pref]
pci 0000:20:01.1: supports D1 D2
pci 0000:20:02.0: [8086:1079] type 00 class 0x020000
pci 0000:20:02.0: reg 10: [mem 0x90320000-0x9033ffff 64bit]
pci 0000:20:02.0: reg 18: [mem 0x90280000-0x902fffff 64bit]
pci 0000:20:02.0: reg 20: [io  0x2240-0x227f]
pci 0000:20:02.0: reg 30: [mem 0x90200000-0x9027ffff pref]
pci 0000:20:02.0: PME# supported from D0 D3hot D3cold
pci 0000:20:02.1: [8086:1079] type 00 class 0x020000
pci 0000:20:02.1: reg 10: [mem 0x90300000-0x9031ffff 64bit]
pci 0000:20:02.1: reg 20: [io  0x2200-0x223f]
pci 0000:20:02.1: PME# supported from D0 D3hot D3cold
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI1._PRT]
 pci0000:20: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI2] (domain 0000 [bus 40-5f])
pci_root HWP0002:02: host bridge window [io  0x3000-0x5fff]
pci_root HWP0002:02: host bridge window [mem 0x98000000-0xafffffff]
pci_root HWP0002:02: host bridge window [mem 0xa0004000000-0xa0103fffffe]
PCI host bridge to bus 0000:40
pci_bus 0000:40: busn_res: [bus 40-5f] is inserted under domain [bus 00-ff]
pci_bus 0000:40: root bus resource [bus 40-5f]
pci_bus 0000:40: root bus resource [io  0x3000-0x5fff]
pci_bus 0000:40: root bus resource [mem 0x98000000-0xafffffff]
pci_bus 0000:40: root bus resource [mem 0xa0004000000-0xa0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI2._PRT]
 pci0000:40: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 60-7f])
pci_root HWP0002:03: host bridge window [io  0x6000-0x7fff]
pci_root HWP0002:03: host bridge window [mem 0xb0000000-0xc7ffffff]
pci_root HWP0002:03: host bridge window [mem 0xb0004000000-0xb0103fffffe]
PCI host bridge to bus 0000:60
pci_bus 0000:60: busn_res: [bus 60-7f] is inserted under domain [bus 00-ff]
pci_bus 0000:60: root bus resource [bus 60-7f]
pci_bus 0000:60: root bus resource [io  0x6000-0x7fff]
pci_bus 0000:60: root bus resource [mem 0xb0000000-0xc7ffffff]
pci_bus 0000:60: root bus resource [mem 0xb0004000000-0xb0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI3._PRT]
 pci0000:60: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI4] (domain 0000 [bus 80-bf])
pci_root HWP0002:04: host bridge window [io  0x8000-0xbfff]
pci_root HWP0002:04: host bridge window [mem 0xc8000000-0xdfffffff]
pci_root HWP0002:04: host bridge window [mem 0xc0004000000-0xc0103fffffe]
PCI host bridge to bus 0000:80
pci_bus 0000:80: busn_res: [bus 80-bf] is inserted under domain [bus 00-ff]
pci_bus 0000:80: root bus resource [bus 80-bf]
pci_bus 0000:80: root bus resource [io  0x8000-0xbfff]
pci_bus 0000:80: root bus resource [mem 0xc8000000-0xdfffffff]
pci_bus 0000:80: root bus resource [mem 0xc0004000000-0xc0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI4._PRT]
 pci0000:80: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI6] (domain 0000 [bus c0-df])
pci_root HWP0002:05: host bridge window [io  0xc000-0xdfff]
pci_root HWP0002:05: host bridge window [mem 0xe0000000-0xefffffff]
pci_root HWP0002:05: host bridge window [mem 0xe0004000000-0xe0103fffffe]
PCI host bridge to bus 0000:c0
pci_bus 0000:c0: busn_res: [bus c0-df] is inserted under domain [bus 00-ff]
pci_bus 0000:c0: root bus resource [bus c0-df]
pci_bus 0000:c0: root bus resource [io  0xc000-0xdfff]
pci_bus 0000:c0: root bus resource [mem 0xe0000000-0xefffffff]
pci_bus 0000:c0: root bus resource [mem 0xe0004000000-0xe0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI6._PRT]
 pci0000:c0: Unable to request _OSC control (_OSC support mask: 0x09)
vgaarb: loaded
SCSI subsystem initialized
ACPI: bus type usb registered
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
Advanced Linux Sound Architecture Driver Version 1.0.25.
IOC: zx1 2.3 HPA 0xfed01000 IOVA space 1024Mb at 0x40000000
Switching to clocksource itc
pnp: PnP ACPI init
ACPI: bus type pnp registered
pnp 00:00: [mem 0xfed00000-0xfed07fff]
pnp 00:00: Plug and Play ACPI device, IDs HWP0001 PNP0a05 (active)
pnp 00:01: [mem 0xff5b0000-0xff5b0003]
pnp 00:01: Plug and Play ACPI device, IDs IPI0001 (active)
pnp 00:02: [bus 00-1f]
pnp 00:02: [mem 0xfed20000-0xfed21fff]
pnp 00:02: [io  0x0000-0x1fff window]
pnp 00:02: [mem 0x80000000-0x8fffffff window]
pnp 00:02: [mem 0x80004000000-0x80103fffffe window]
pnp 00:02: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:03: [bus 20-3f]
pnp 00:03: [mem 0xff5e0000-0xff5e0007 window]
pnp 00:03: [mem 0xff5e2000-0xff5e2007 window]
pnp 00:03: [mem 0xfed22000-0xfed23fff]
pnp 00:03: [io  0x2000-0x2fff window]
pnp 00:03: [mem 0x90000000-0x97ffffff window]
pnp 00:03: [mem 0x90004000000-0x90103fffffe window]
pnp 00:03: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
GSI 34 (level, low) -> CPU 1 (0x0100) vector 49
pnp 00:04: [irq 49]
pnp 00:04: [mem 0xff5e0000-0xff5e0007]
pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
GSI 35 (level, low) -> CPU 0 (0x0000) vector 50
pnp 00:05: [irq 50]
pnp 00:05: [mem 0xff5e2000-0xff5e2007]
pnp 00:05: Plug and Play ACPI device, IDs PNP0501 (active)
pnp 00:06: [bus 40-5f]
pnp 00:06: [mem 0xfed24000-0xfed25fff]
pnp 00:06: [io  0x3000-0x5fff window]
pnp 00:06: [mem 0x98000000-0xafffffff window]
pnp 00:06: [mem 0xa0004000000-0xa0103fffffe window]
pnp 00:06: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:07: [bus 60-7f]
pnp 00:07: [mem 0xfed26000-0xfed27fff]
pnp 00:07: [io  0x6000-0x7fff window]
pnp 00:07: [mem 0xb0000000-0xc7ffffff window]
pnp 00:07: [mem 0xb0004000000-0xb0103fffffe window]
pnp 00:07: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:08: [bus 80-bf]
pnp 00:08: [mem 0xfed28000-0xfed29fff]
pnp 00:08: [io  0x8000-0xbfff window]
pnp 00:08: [mem 0xc8000000-0xdfffffff window]
pnp 00:08: [mem 0xc0004000000-0xc0103fffffe window]
pnp 00:08: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:09: [bus c0-df]
pnp 00:09: [mem 0xfed2c000-0xfed2dfff]
pnp 00:09: [io  0xc000-0xdfff window]
pnp 00:09: [mem 0xe0000000-0xefffffff window]
pnp 00:09: [mem 0xe0004000000-0xe0103fffffe window]
pnp 00:09: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp: PnP ACPI: found 10 devices
ACPI: ACPI bus type pnp unregistered
NET: Registered protocol family 2
IP route cache hash table entries: 16384 (order: 3, 131072 bytes)
TCP established hash table entries: 65536 (order: 6, 1048576 bytes)
TCP bind hash table entries: 65536 (order: 6, 1048576 bytes)
TCP: Hash tables configured (established 65536 bind 65536)
TCP: reno registered
UDP hash table entries: 1024 (order: 1, 32768 bytes)
UDP-Lite hash table entries: 1024 (order: 1, 32768 bytes)
NET: Registered protocol family 1
RPC: Registered named UNIX socket transport module.
RPC: Registered udp transport module.
RPC: Registered tcp transport module.
RPC: Registered tcp NFSv4.1 backchannel transport module.
GSI 16 (level, low) -> CPU 1 (0x0100) vector 51
GSI 16 (level, low) -> CPU 1 (0x0100) vector 51 unregistered
GSI 17 (level, low) -> CPU 0 (0x0000) vector 51
GSI 17 (level, low) -> CPU 0 (0x0000) vector 51 unregistered
GSI 18 (level, low) -> CPU 1 (0x0100) vector 51
GSI 18 (level, low) -> CPU 1 (0x0100) vector 51 unregistered
PCI: CLS 128 bytes, default 128
Trying to unpack rootfs image as initramfs...
Freeing initrd memory: 5920kB freed
perfmon: version 2.0 IRQ 238
perfmon: Itanium 2 PMU detected, 16 PMCs, 18 PMDs, 4 counters (47 bits)
PAL Information Facility v0.5
perfmon: added sampling format default_format
perfmon_default_smpl: default_format v2.0 registered
HugeTLB registered 256 MB page size, pre-allocated 0 pages
NFS: Registering the id_resolver key type
Key type id_resolver registered
Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
msgmni has been set to 4011
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 254)
io scheduler noop registered
io scheduler deadline registered
io scheduler cfq registered (default)
pci_hotplug: PCI Hot Plug PCI Core version: 0.5
acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
ACPI: Power Button [PWRF]
input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input1
ACPI: Sleep Button [SLPF]
thermal LNXTHERM:00: registered as thermal_zone0
ACPI: Thermal Zone [THM0] (27 C)
Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
00:04: ttyS0 at MMIO 0xff5e0000 (irq = 49) is a 16550A
console [ttyS0] enabled, bootconsole disabled
00:05: ttyS1 at MMIO 0xff5e2000 (irq = 50) is a 16550A
EFI Time Services Driver v0.4
Linux agpgart interface v0.103
[drm] Initialized drm 1.1.0 20060810
[drm] radeon defaulting to userspace modesetting.
brd: module loaded
loop: module loaded
Uniform Multi-Platform E-IDE driver
cmd64x 0000:00:02.0: IDE controller (0x1095:0x0649 rev 0x02)
GSI 21 (level, low) -> CPU 0 (0x0000) vector 51
cmd64x 0000:00:02.0: IDE port disabled
cmd64x 0000:00:02.0: 100% native mode on irq 54
    ide0: BM-DMA at 0x0d00-0x0d07
Probing IDE interface ide0...
hda: _NEC DVD+/-RW ND-6650A, ATAPI CD/DVD-ROM drive
hda: host max PIO5 wanted PIO255(auto-tune) selected PIO4
hda: MWDMA2 mode selected
ide0 at 0xd18-0xd1f,0xd26 on irq 54
ide-gd driver 1.18
ide-cd driver 5.00
ide-cd: hda: ATAPI 24X DVD-ROM DVD-R CD-R/RW drive, 2048kB Cache
cdrom: Uniform CD-ROM driver Revision: 3.20
st: Version 20101219, fixed bufsize 32768, s/g segs 256
osst :I: Tape driver with OnStream support version 0.99.4
osst :I: $Id: osst.c,v 1.73 2005/01/01 21:13:34 wriede Exp $
e100: Intel(R) PRO/100 Network Driver, 3.5.24-k2-NAPI
e100: Copyright(c) 1999-2006 Intel Corporation
e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
e1000: Copyright (c) 1999-2006 Intel Corporation.
GSI 29 (level, low) -> CPU 1 (0x0100) vector 52
e1000 0000:20:02.0: eth0: (PCI-X:66MHz:64-bit) 00:13:21:5b:f6:9a
e1000 0000:20:02.0: eth0: Intel(R) PRO/1000 Network Connection
GSI 30 (level, low) -> CPU 0 (0x0000) vector 53
e1000 0000:20:02.1: eth1: (PCI-X:66MHz:64-bit) 00:13:21:5b:f6:9b
e1000 0000:20:02.1: eth1: Intel(R) PRO/1000 Network Connection
Fusion MPT base driver 3.04.20
Copyright (c) 1999-2008 LSI Corporation
Fusion MPT SPI Host driver 3.04.20
GSI 27 (level, low) -> CPU 1 (0x0100) vector 54
mptbase: ioc0: Initiating bringup
ioc0: LSI53C1030 C0: Capabilities={Initiator,Target}
scsi0 : ioc0: LSI53C1030 C0, FwRev=01032341h, Ports=1, MaxQ=255, IRQ=57
scsi 0:0:0:0: Direct-Access     HP 36.4G MAU3036NC        HPC2 PQ: 0 ANSI: 3
scsi target0:0:0: Beginning Domain Validation
scsi target0:0:0: Ending Domain Validation
scsi target0:0:0: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
sd 0:0:0:0: Attached scsi generic sg0 type 0
sd 0:0:0:0: [sda] 71132960 512-byte logical blocks: (36.4 GB/33.9 GiB)
scsi 0:0:1:0: Direct-Access     HP 36.4G MAU3036NC        HPC2 PQ: 0 ANSI: 3
scsi target0:0:1: Beginning Domain Validation
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: cf 00 10 08
sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
scsi target0:0:1: Ending Domain Validation
scsi target0:0:1: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
sd 0:0:1:0: Attached scsi generic sg1 type 0
sd 0:0:1:0: [sdb] 71132960 512-byte logical blocks: (36.4 GB/33.9 GiB)
 sda: sda1
sd 0:0:1:0: [sdb] Write Protect is off
sd 0:0:1:0: [sdb] Mode Sense: cf 00 10 08
sd 0:0:1:0: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA
sd 0:0:0:0: [sda] Attached SCSI disk
GSI 28 (level, low) -> CPU 0 (0x0000) vector 55
mptbase: ioc1: Initiating bringup
ioc1: LSI53C1030 C0: Capabilities={Initiator,Target}
scsi1 : ioc1: LSI53C1030 C0, FwRev=01032341h, Ports=1, MaxQ=255, IRQ=58
 sdb: sdb1 sdb2
sd 0:0:1:0: [sdb] Attached SCSI disk
Fusion MPT FC Host driver 3.04.20
ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
GSI 18 (level, low) -> CPU 1 (0x0100) vector 56
ehci_hcd 0000:00:01.2: EHCI Host Controller
ehci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
ehci_hcd 0000:00:01.2: irq 53, io mem 0x80000000
ehci_hcd 0000:00:01.2: USB 2.0 started, EHCI 0.95
hub 1-0:1.0: USB hub found
hub 1-0:1.0: 5 ports detected
ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
GSI 16 (level, low) -> CPU 0 (0x0000) vector 57
ohci_hcd 0000:00:01.0: OHCI Host Controller
ohci_hcd 0000:00:01.0: new USB bus registered, assigned bus number 2
ohci_hcd 0000:00:01.0: irq 51, io mem 0x80002000
hub 2-0:1.0: USB hub found
hub 2-0:1.0: 3 ports detected
GSI 17 (level, low) -> CPU 1 (0x0100) vector 58
ohci_hcd 0000:00:01.1: OHCI Host Controller
ohci_hcd 0000:00:01.1: new USB bus registered, assigned bus number 3
ohci_hcd 0000:00:01.1: irq 52, io mem 0x80001000
hub 3-0:1.0: USB hub found
hub 3-0:1.0: 2 ports detected
uhci_hcd: USB Universal Host Controller Interface driver
Initializing USB Mass Storage driver...
usbcore: registered new interface driver usb-storage
USB Mass Storage support registered.
mousedev: PS/2 mouse device common for all mice
i2c /dev entries driver
EFI Variables Facility v0.08 2004-May-17
usbcore: registered new interface driver usbhid
usbhid: USB HID core driver
TCP: cubic registered
NET: Registered protocol family 17
Key type dns_resolver registered
ALSA device list:
  No soundcards found.
Freeing unused kernel memory: 848kB freed
udevd (136): /proc/136/oom_adj is deprecated, please use /proc/136/oom_score_adj instead.
udevd version 128 started
EXT3-fs (sdb2): (no)acl options not supported
kjournald starting.  Commit interval 5 seconds
EXT3-fs (sdb2): using internal journal
EXT3-fs (sdb2): mounted filesystem with ordered data mode
EXT3-fs (sdb2): (no)acl options not supported
udevd version 128 started
Fusion MPT misc device (ioctl) driver 3.04.20
mptctl: Registered with Fusion MPT base driver
mptctl: /dev/mptctl @ (major,minor=10,220)
EXT3-fs (sda1): (no)acl options not supported
kjournald starting.  Commit interval 5 seconds
EXT3-fs (sda1): using internal journal
EXT3-fs (sda1): mounted filesystem with ordered data mode
e1000: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX

^ permalink raw reply	[flat|nested] 96+ messages in thread

 [<a0000001000bdc10>] kthread+0x110/0x140
                                sp=e0000040600f7e00 bsp=e0000040600f0be8
 [<a000000100013510>] kernel_thread_helper+0x30/0x60
                                sp=e0000040600f7e30 bsp=e0000040600f0bc0
 [<a00000010000a0c0>] start_kernel_thread+0x20/0x40
                                sp=e0000040600f7e30 bsp=e0000040600f0bc0
---[ end trace e9840e0cb994cb82 ]---
Fixed BSP b0 value from CPU 1
CPU 1: synchronized ITC with CPU 0 (last diff -12 cycles, maxerr 585 cycles)
CPU 1: base freq=199.999MHz, ITC ratio=13/2, ITC freq=1299.994MHz+/-650ppm
Brought up 2 CPUs
Total of 2 processors activated (3891.20 BogoMIPS).
DMI 2.3 present.
DMI: hp server rx2620                   , BIOS 03.17                                                            03/31/2005
NET: Registered protocol family 16
ACPI: bus type pci registered
bio: create slab <bio-0> at 0
ACPI: Added _OSI(Module Device)
ACPI: Added _OSI(Processor Device)
ACPI: Added _OSI(3.0 _SCP Extensions)
ACPI: Added _OSI(Processor Aggregator Device)
ACPI: EC: Look up EC in DSDT
ACPI: Interpreter enabled
ACPI: (supports S0 S5)
ACPI: Using IOSAPIC for interrupt routing
ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-1f])
pci_root HWP0002:00: host bridge window [io  0x0000-0x1fff]
pci_root HWP0002:00: host bridge window [mem 0x80000000-0x8fffffff]
pci_root HWP0002:00: host bridge window [mem 0x80004000000-0x80103fffffe]
PCI host bridge to bus 0000:00
pci_bus 0000:00: busn_res: [bus 00-1f] is inserted under domain [bus 00-ff]
pci_bus 0000:00: root bus resource [bus 00-1f]
pci_bus 0000:00: root bus resource [io  0x0000-0x1fff]
pci_bus 0000:00: root bus resource [mem 0x80000000-0x8fffffff]
pci_bus 0000:00: root bus resource [mem 0x80004000000-0x80103fffffe]
pci 0000:00:01.0: [1033:0035] type 00 class 0x0c0310
pci 0000:00:01.0: reg 10: [mem 0x80002000-0x80002fff]
pci 0000:00:01.0: supports D1 D2
pci 0000:00:01.0: PME# supported from D0 D1 D2 D3hot
pci 0000:00:01.1: [1033:0035] type 00 class 0x0c0310
pci 0000:00:01.1: reg 10: [mem 0x80001000-0x80001fff]
pci 0000:00:01.1: supports D1 D2
pci 0000:00:01.1: PME# supported from D0 D1 D2 D3hot
pci 0000:00:01.2: [1033:00e0] type 00 class 0x0c0320
pci 0000:00:01.2: reg 10: [mem 0x80000000-0x800000ff]
pci 0000:00:01.2: supports D1 D2
pci 0000:00:01.2: PME# supported from D0 D1 D2 D3hot
pci 0000:00:02.0: [1095:0649] type 00 class 0x01018f
pci 0000:00:02.0: reg 10: [io  0x0d18-0x0d1f]
pci 0000:00:02.0: reg 14: [io  0x0d24-0x0d27]
pci 0000:00:02.0: reg 18: [io  0x0d10-0x0d17]
pci 0000:00:02.0: reg 1c: [io  0x0d20-0x0d23]
pci 0000:00:02.0: reg 20: [io  0x0d00-0x0d0f]
pci 0000:00:02.0: supports D1 D2
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI0._PRT]
 pci0000:00: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI1] (domain 0000 [bus 20-3f])
pci_root HWP0002:01: host bridge window [mem 0xff5e0000-0xff5e0007]
pci_root HWP0002:01: host bridge window [mem 0xff5e2000-0xff5e2007]
pci_root HWP0002:01: host bridge window [io  0x2000-0x2fff]
pci_root HWP0002:01: host bridge window [mem 0x90000000-0x97ffffff]
pci_root HWP0002:01: host bridge window [mem 0x90004000000-0x90103fffffe]
PCI host bridge to bus 0000:20
pci_bus 0000:20: busn_res: [bus 20-3f] is inserted under domain [bus 00-ff]
pci_bus 0000:20: root bus resource [bus 20-3f]
pci_bus 0000:20: root bus resource [io  0x2000-0x2fff]
pci_bus 0000:20: root bus resource [mem 0x90000000-0x97ffffff]
pci_bus 0000:20: root bus resource [mem 0x90004000000-0x90103fffffe]
pci 0000:20:01.0: [1000:0030] type 00 class 0x010000
pci 0000:20:01.0: reg 10: [io  0x2100-0x21ff]
pci 0000:20:01.0: reg 14: [mem 0x903a0000-0x903bffff 64bit]
pci 0000:20:01.0: reg 1c: [mem 0x90380000-0x9039ffff 64bit]
pci 0000:20:01.0: reg 30: [mem 0x90100000-0x901fffff pref]
pci 0000:20:01.0: supports D1 D2
pci 0000:20:01.1: [1000:0030] type 00 class 0x010000
pci 0000:20:01.1: reg 10: [io  0x2000-0x20ff]
pci 0000:20:01.1: reg 14: [mem 0x90360000-0x9037ffff 64bit]
pci 0000:20:01.1: reg 1c: [mem 0x90340000-0x9035ffff 64bit]
pci 0000:20:01.1: reg 30: [mem 0x90000000-0x900fffff pref]
pci 0000:20:01.1: supports D1 D2
pci 0000:20:02.0: [8086:1079] type 00 class 0x020000
pci 0000:20:02.0: reg 10: [mem 0x90320000-0x9033ffff 64bit]
pci 0000:20:02.0: reg 18: [mem 0x90280000-0x902fffff 64bit]
pci 0000:20:02.0: reg 20: [io  0x2240-0x227f]
pci 0000:20:02.0: reg 30: [mem 0x90200000-0x9027ffff pref]
pci 0000:20:02.0: PME# supported from D0 D3hot D3cold
pci 0000:20:02.1: [8086:1079] type 00 class 0x020000
pci 0000:20:02.1: reg 10: [mem 0x90300000-0x9031ffff 64bit]
pci 0000:20:02.1: reg 20: [io  0x2200-0x223f]
pci 0000:20:02.1: PME# supported from D0 D3hot D3cold
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI1._PRT]
 pci0000:20: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI2] (domain 0000 [bus 40-5f])
pci_root HWP0002:02: host bridge window [io  0x3000-0x5fff]
pci_root HWP0002:02: host bridge window [mem 0x98000000-0xafffffff]
pci_root HWP0002:02: host bridge window [mem 0xa0004000000-0xa0103fffffe]
PCI host bridge to bus 0000:40
pci_bus 0000:40: busn_res: [bus 40-5f] is inserted under domain [bus 00-ff]
pci_bus 0000:40: root bus resource [bus 40-5f]
pci_bus 0000:40: root bus resource [io  0x3000-0x5fff]
pci_bus 0000:40: root bus resource [mem 0x98000000-0xafffffff]
pci_bus 0000:40: root bus resource [mem 0xa0004000000-0xa0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI2._PRT]
 pci0000:40: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 60-7f])
pci_root HWP0002:03: host bridge window [io  0x6000-0x7fff]
pci_root HWP0002:03: host bridge window [mem 0xb0000000-0xc7ffffff]
pci_root HWP0002:03: host bridge window [mem 0xb0004000000-0xb0103fffffe]
PCI host bridge to bus 0000:60
pci_bus 0000:60: busn_res: [bus 60-7f] is inserted under domain [bus 00-ff]
pci_bus 0000:60: root bus resource [bus 60-7f]
pci_bus 0000:60: root bus resource [io  0x6000-0x7fff]
pci_bus 0000:60: root bus resource [mem 0xb0000000-0xc7ffffff]
pci_bus 0000:60: root bus resource [mem 0xb0004000000-0xb0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI3._PRT]
 pci0000:60: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI4] (domain 0000 [bus 80-bf])
pci_root HWP0002:04: host bridge window [io  0x8000-0xbfff]
pci_root HWP0002:04: host bridge window [mem 0xc8000000-0xdfffffff]
pci_root HWP0002:04: host bridge window [mem 0xc0004000000-0xc0103fffffe]
PCI host bridge to bus 0000:80
pci_bus 0000:80: busn_res: [bus 80-bf] is inserted under domain [bus 00-ff]
pci_bus 0000:80: root bus resource [bus 80-bf]
pci_bus 0000:80: root bus resource [io  0x8000-0xbfff]
pci_bus 0000:80: root bus resource [mem 0xc8000000-0xdfffffff]
pci_bus 0000:80: root bus resource [mem 0xc0004000000-0xc0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI4._PRT]
 pci0000:80: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI6] (domain 0000 [bus c0-df])
pci_root HWP0002:05: host bridge window [io  0xc000-0xdfff]
pci_root HWP0002:05: host bridge window [mem 0xe0000000-0xefffffff]
pci_root HWP0002:05: host bridge window [mem 0xe0004000000-0xe0103fffffe]
PCI host bridge to bus 0000:c0
pci_bus 0000:c0: busn_res: [bus c0-df] is inserted under domain [bus 00-ff]
pci_bus 0000:c0: root bus resource [bus c0-df]
pci_bus 0000:c0: root bus resource [io  0xc000-0xdfff]
pci_bus 0000:c0: root bus resource [mem 0xe0000000-0xefffffff]
pci_bus 0000:c0: root bus resource [mem 0xe0004000000-0xe0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI6._PRT]
 pci0000:c0: Unable to request _OSC control (_OSC support mask: 0x09)
vgaarb: loaded
SCSI subsystem initialized
ACPI: bus type usb registered
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
Advanced Linux Sound Architecture Driver Version 1.0.25.
IOC: zx1 2.3 HPA 0xfed01000 IOVA space 1024Mb at 0x40000000
Switching to clocksource itc
pnp: PnP ACPI init
ACPI: bus type pnp registered
pnp 00:00: [mem 0xfed00000-0xfed07fff]
pnp 00:00: Plug and Play ACPI device, IDs HWP0001 PNP0a05 (active)
pnp 00:01: [mem 0xff5b0000-0xff5b0003]
pnp 00:01: Plug and Play ACPI device, IDs IPI0001 (active)
pnp 00:02: [bus 00-1f]
pnp 00:02: [mem 0xfed20000-0xfed21fff]
pnp 00:02: [io  0x0000-0x1fff window]
pnp 00:02: [mem 0x80000000-0x8fffffff window]
pnp 00:02: [mem 0x80004000000-0x80103fffffe window]
pnp 00:02: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:03: [bus 20-3f]
pnp 00:03: [mem 0xff5e0000-0xff5e0007 window]
pnp 00:03: [mem 0xff5e2000-0xff5e2007 window]
pnp 00:03: [mem 0xfed22000-0xfed23fff]
pnp 00:03: [io  0x2000-0x2fff window]
pnp 00:03: [mem 0x90000000-0x97ffffff window]
pnp 00:03: [mem 0x90004000000-0x90103fffffe window]
pnp 00:03: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
GSI 34 (level, low) -> CPU 1 (0x0100) vector 49
pnp 00:04: [irq 49]
pnp 00:04: [mem 0xff5e0000-0xff5e0007]
pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
GSI 35 (level, low) -> CPU 0 (0x0000) vector 50
pnp 00:05: [irq 50]
pnp 00:05: [mem 0xff5e2000-0xff5e2007]
pnp 00:05: Plug and Play ACPI device, IDs PNP0501 (active)
pnp 00:06: [bus 40-5f]
pnp 00:06: [mem 0xfed24000-0xfed25fff]
pnp 00:06: [io  0x3000-0x5fff window]
pnp 00:06: [mem 0x98000000-0xafffffff window]
pnp 00:06: [mem 0xa0004000000-0xa0103fffffe window]
pnp 00:06: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:07: [bus 60-7f]
pnp 00:07: [mem 0xfed26000-0xfed27fff]
pnp 00:07: [io  0x6000-0x7fff window]
pnp 00:07: [mem 0xb0000000-0xc7ffffff window]
pnp 00:07: [mem 0xb0004000000-0xb0103fffffe window]
pnp 00:07: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:08: [bus 80-bf]
pnp 00:08: [mem 0xfed28000-0xfed29fff]
pnp 00:08: [io  0x8000-0xbfff window]
pnp 00:08: [mem 0xc8000000-0xdfffffff window]
pnp 00:08: [mem 0xc0004000000-0xc0103fffffe window]
pnp 00:08: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:09: [bus c0-df]
pnp 00:09: [mem 0xfed2c000-0xfed2dfff]
pnp 00:09: [io  0xc000-0xdfff window]
pnp 00:09: [mem 0xe0000000-0xefffffff window]
pnp 00:09: [mem 0xe0004000000-0xe0103fffffe window]
pnp 00:09: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp: PnP ACPI: found 10 devices
ACPI: ACPI bus type pnp unregistered
NET: Registered protocol family 2
IP route cache hash table entries: 16384 (order: 3, 131072 bytes)
TCP established hash table entries: 65536 (order: 6, 1048576 bytes)
TCP bind hash table entries: 65536 (order: 6, 1048576 bytes)
TCP: Hash tables configured (established 65536 bind 65536)
TCP: reno registered
UDP hash table entries: 1024 (order: 1, 32768 bytes)
UDP-Lite hash table entries: 1024 (order: 1, 32768 bytes)
NET: Registered protocol family 1
RPC: Registered named UNIX socket transport module.
RPC: Registered udp transport module.
RPC: Registered tcp transport module.
RPC: Registered tcp NFSv4.1 backchannel transport module.
GSI 16 (level, low) -> CPU 1 (0x0100) vector 51
GSI 16 (level, low) -> CPU 1 (0x0100) vector 51 unregistered
GSI 17 (level, low) -> CPU 0 (0x0000) vector 51
GSI 17 (level, low) -> CPU 0 (0x0000) vector 51 unregistered
GSI 18 (level, low) -> CPU 1 (0x0100) vector 51
GSI 18 (level, low) -> CPU 1 (0x0100) vector 51 unregistered
PCI: CLS 128 bytes, default 128
Trying to unpack rootfs image as initramfs...
Freeing initrd memory: 5920kB freed
perfmon: version 2.0 IRQ 238
perfmon: Itanium 2 PMU detected, 16 PMCs, 18 PMDs, 4 counters (47 bits)
PAL Information Facility v0.5
perfmon: added sampling format default_format
perfmon_default_smpl: default_format v2.0 registered
HugeTLB registered 256 MB page size, pre-allocated 0 pages
NFS: Registering the id_resolver key type
Key type id_resolver registered
Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
msgmni has been set to 4011
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 254)
io scheduler noop registered
io scheduler deadline registered
io scheduler cfq registered (default)
pci_hotplug: PCI Hot Plug PCI Core version: 0.5
acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
ACPI: Power Button [PWRF]
input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input1
ACPI: Sleep Button [SLPF]
thermal LNXTHERM:00: registered as thermal_zone0
ACPI: Thermal Zone [THM0] (27 C)
Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
00:04: ttyS0 at MMIO 0xff5e0000 (irq = 49) is a 16550A
console [ttyS0] enabled, bootconsole disabled
00:05: ttyS1 at MMIO 0xff5e2000 (irq = 50) is a 16550A
EFI Time Services Driver v0.4
Linux agpgart interface v0.103
[drm] Initialized drm 1.1.0 20060810
[drm] radeon defaulting to userspace modesetting.
brd: module loaded
loop: module loaded
Uniform Multi-Platform E-IDE driver
cmd64x 0000:00:02.0: IDE controller (0x1095:0x0649 rev 0x02)
GSI 21 (level, low) -> CPU 0 (0x0000) vector 51
cmd64x 0000:00:02.0: IDE port disabled
cmd64x 0000:00:02.0: 100% native mode on irq 54
    ide0: BM-DMA at 0x0d00-0x0d07
Probing IDE interface ide0...
hda: _NEC DVD+/-RW ND-6650A, ATAPI CD/DVD-ROM drive
hda: host max PIO5 wanted PIO255(auto-tune) selected PIO4
hda: MWDMA2 mode selected
ide0 at 0xd18-0xd1f,0xd26 on irq 54
ide-gd driver 1.18
ide-cd driver 5.00
ide-cd: hda: ATAPI 24X DVD-ROM DVD-R CD-R/RW drive, 2048kB Cache
cdrom: Uniform CD-ROM driver Revision: 3.20
st: Version 20101219, fixed bufsize 32768, s/g segs 256
osst :I: Tape driver with OnStream support version 0.99.4
osst :I: $Id: osst.c,v 1.73 2005/01/01 21:13:34 wriede Exp $
e100: Intel(R) PRO/100 Network Driver, 3.5.24-k2-NAPI
e100: Copyright(c) 1999-2006 Intel Corporation
e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
e1000: Copyright (c) 1999-2006 Intel Corporation.
GSI 29 (level, low) -> CPU 1 (0x0100) vector 52
e1000 0000:20:02.0: eth0: (PCI-X:66MHz:64-bit) 00:13:21:5b:f6:9a
e1000 0000:20:02.0: eth0: Intel(R) PRO/1000 Network Connection
GSI 30 (level, low) -> CPU 0 (0x0000) vector 53
e1000 0000:20:02.1: eth1: (PCI-X:66MHz:64-bit) 00:13:21:5b:f6:9b
e1000 0000:20:02.1: eth1: Intel(R) PRO/1000 Network Connection
Fusion MPT base driver 3.04.20
Copyright (c) 1999-2008 LSI Corporation
Fusion MPT SPI Host driver 3.04.20
GSI 27 (level, low) -> CPU 1 (0x0100) vector 54
mptbase: ioc0: Initiating bringup
ioc0: LSI53C1030 C0: Capabilities={Initiator,Target}
scsi0 : ioc0: LSI53C1030 C0, FwRev=01032341h, Ports=1, MaxQ=255, IRQ=57
scsi 0:0:0:0: Direct-Access     HP 36.4G MAU3036NC        HPC2 PQ: 0 ANSI: 3
scsi target0:0:0: Beginning Domain Validation
scsi target0:0:0: Ending Domain Validation
scsi target0:0:0: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
sd 0:0:0:0: Attached scsi generic sg0 type 0
sd 0:0:0:0: [sda] 71132960 512-byte logical blocks: (36.4 GB/33.9 GiB)
scsi 0:0:1:0: Direct-Access     HP 36.4G MAU3036NC        HPC2 PQ: 0 ANSI: 3
scsi target0:0:1: Beginning Domain Validation
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: cf 00 10 08
sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
scsi target0:0:1: Ending Domain Validation
scsi target0:0:1: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
sd 0:0:1:0: Attached scsi generic sg1 type 0
sd 0:0:1:0: [sdb] 71132960 512-byte logical blocks: (36.4 GB/33.9 GiB)
 sda: sda1
sd 0:0:1:0: [sdb] Write Protect is off
sd 0:0:1:0: [sdb] Mode Sense: cf 00 10 08
sd 0:0:1:0: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA
sd 0:0:0:0: [sda] Attached SCSI disk
GSI 28 (level, low) -> CPU 0 (0x0000) vector 55
mptbase: ioc1: Initiating bringup
ioc1: LSI53C1030 C0: Capabilities={Initiator,Target}
scsi1 : ioc1: LSI53C1030 C0, FwRev=01032341h, Ports=1, MaxQ=255, IRQ=58
 sdb: sdb1 sdb2
sd 0:0:1:0: [sdb] Attached SCSI disk
Fusion MPT FC Host driver 3.04.20
ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
GSI 18 (level, low) -> CPU 1 (0x0100) vector 56
ehci_hcd 0000:00:01.2: EHCI Host Controller
ehci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
ehci_hcd 0000:00:01.2: irq 53, io mem 0x80000000
ehci_hcd 0000:00:01.2: USB 2.0 started, EHCI 0.95
hub 1-0:1.0: USB hub found
hub 1-0:1.0: 5 ports detected
ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
GSI 16 (level, low) -> CPU 0 (0x0000) vector 57
ohci_hcd 0000:00:01.0: OHCI Host Controller
ohci_hcd 0000:00:01.0: new USB bus registered, assigned bus number 2
ohci_hcd 0000:00:01.0: irq 51, io mem 0x80002000
hub 2-0:1.0: USB hub found
hub 2-0:1.0: 3 ports detected
GSI 17 (level, low) -> CPU 1 (0x0100) vector 58
ohci_hcd 0000:00:01.1: OHCI Host Controller
ohci_hcd 0000:00:01.1: new USB bus registered, assigned bus number 3
ohci_hcd 0000:00:01.1: irq 52, io mem 0x80001000
hub 3-0:1.0: USB hub found
hub 3-0:1.0: 2 ports detected
uhci_hcd: USB Universal Host Controller Interface driver
Initializing USB Mass Storage driver...
usbcore: registered new interface driver usb-storage
USB Mass Storage support registered.
mousedev: PS/2 mouse device common for all mice
i2c /dev entries driver
EFI Variables Facility v0.08 2004-May-17
usbcore: registered new interface driver usbhid
usbhid: USB HID core driver
TCP: cubic registered
NET: Registered protocol family 17
Key type dns_resolver registered
ALSA device list:
  No soundcards found.
Freeing unused kernel memory: 848kB freed
udevd (136): /proc/136/oom_adj is deprecated, please use /proc/136/oom_score_adj instead.
udevd version 128 started
EXT3-fs (sdb2): (no)acl options not supported
kjournald starting.  Commit interval 5 seconds
EXT3-fs (sdb2): using internal journal
EXT3-fs (sdb2): mounted filesystem with ordered data mode
EXT3-fs (sdb2): (no)acl options not supported
udevd version 128 started
Fusion MPT misc device (ioctl) driver 3.04.20
mptctl: Registered with Fusion MPT base driver
mptctl: /dev/mptctl @ (major,minor=10,220)
EXT3-fs (sda1): (no)acl options not supported
kjournald starting.  Commit interval 5 seconds
EXT3-fs (sda1): using internal journal
EXT3-fs (sda1): mounted filesystem with ordered data mode
e1000: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
@ 2012-07-12 22:16             ` Tony Luck
  0 siblings, 0 replies; 96+ messages in thread
From: Tony Luck @ 2012-07-12 22:16 UTC (permalink / raw)
  To: Tejun Heo
  Cc: axboe, xfs, elder, rni, martin.petersen, linux-bluetooth,
	torvalds, marcel, linux-kernel, vwadekar, swhiteho, herbert, bpm,
	linux-crypto, gustavo, Fengguang Wu, joshhunt00, davem, vgoyal,
	johan.hedberg

[-- Attachment #1: Type: text/plain, Size: 430 bytes --]

On Thu, Jul 12, 2012 at 2:45 PM, Tejun Heo <tj@kernel.org> wrote:
> I was wrong and am now dazed and confused.  That's from
> init_workqueues() where only cpu0 is running.  How the hell did
> nr_running manage to become non-zero at that point?  Can you please
> apply the following patch and report the boot log?  Thank you.

Patch applied on top of next-20120712 (which still has the same problem).

dmesg output attached.

-Tony

[-- Attachment #2: dmesg.txt --]
[-- Type: text/plain, Size: 23573 bytes --]

Linux version 3.5.0-rc6-zx1-smp-next-20120712-1-g1275170 (aegl@linux-bxb1) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP Thu Jul 12 15:09:17 PDT 2012
EFI v1.10 by HP: SALsystab=0x3fefa000 ACPI 2.0=0x3fd5e000 SMBIOS=0x3fefc000 HCDP=0x3fd5c000
Early serial console at MMIO 0xff5e0000 (options '9600')
bootconsole [uart0] enabled
PCDP: v0 at 0x3fd5c000
Explicit "console="; ignoring PCDP
ACPI: RSDP 000000003fd5e000 00028 (v02     HP)
ACPI: XSDT 000000003fd5e02c 00094 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: FACP 000000003fd67390 000F4 (v03     HP   rx2620 00000000   HP 00000000)
ACPI Warning: 32/64X length mismatch in Gpe0Block: 32/16 (20120518/tbfadt-565)
ACPI Warning: 32/64X length mismatch in Gpe1Block: 32/16 (20120518/tbfadt-565)
ACPI: DSDT 000000003fd5e100 05F3C (v01     HP   rx2620 00000007 INTL 02012044)
ACPI: FACS 000000003fd67488 00040
ACPI: SPCR 000000003fd674c8 00050 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: DBGP 000000003fd67518 00034 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: APIC 000000003fd67610 000B0 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: SPMI 000000003fd67550 00050 (v04     HP   rx2620 00000000   HP 00000000)
ACPI: CPEP 000000003fd675a0 00034 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: SSDT 000000003fd64040 001D6 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd64220 00702 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd64930 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd65350 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd65d70 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd66790 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd671b0 000EB (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd672a0 000EF (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: Local APIC address c0000000fee00000
2 CPUs available, 2 CPUs total
warning: skipping physical page 0
Initial ramdisk at: 0xe00000407e9bb000 (6071698 bytes)
SAL 3.1: HP version 3.15
SAL Platform features: None
SAL: AP wakeup using external interrupt vector 0xff
MCA related initialization done
warning: skipping physical page 0
Zone ranges:
  DMA      [mem 0x00004000-0xffffffff]
  Normal   [mem 0x100000000-0x407ffc7fff]
Movable zone start for each node
Early memory node ranges
  node   0: [mem 0x00004000-0x3f4ebfff]
  node   0: [mem 0x3fc00000-0x3fd5bfff]
  node   0: [mem 0x4040000000-0x407fd2bfff]
  node   0: [mem 0x407fd98000-0x407fe07fff]
  node   0: [mem 0x407fe80000-0x407ffc7fff]
On node 0 totalpages: 130378
free_area_init_node: node 0, pgdat a0000001012ee380, node_mem_map a0007fffc7900038
  DMA zone: 896 pages used for memmap
  DMA zone: 0 pages reserved
  DMA zone: 64017 pages, LIFO batch:7
  Normal zone: 56896 pages used for memmap
  Normal zone: 8569 pages, LIFO batch:1
Virtual mem_map starts at 0xa0007fffc7900000
pcpu-alloc: s11392 r8192 d242560 u262144 alloc=16*16384
pcpu-alloc: [0] 0 [0] 1 
Built 1 zonelists in Zone order, mobility grouping off.  Total pages: 72586
Kernel command line: BOOT_IMAGE=scsi0:\efi\SuSE\l-zx1-smp.gz root=/dev/disk/by-id/scsi-200000e1100a5d5f2-part2  console=uart,mmio,0xff5e0000 
PID hash table entries: 4096 (order: 1, 32768 bytes)
Dentry cache hash table entries: 262144 (order: 7, 2097152 bytes)
Inode-cache hash table entries: 131072 (order: 6, 1048576 bytes)
Memory: 2048176k/2086064k available (13820k code, 37888k reserved, 5920k data, 848k init)
SLUB: Genslabs=17, HWalign=128, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Hierarchical RCU implementation.
	RCU restricting CPUs from NR_CPUS=16 to nr_cpu_ids=2.
NR_IRQS:768
ACPI: Local APIC address c0000000fee00000
GSI 36 (level, low) -> CPU 0 (0x0000) vector 48
CPU 0: base freq=199.999MHz, ITC ratio=13/2, ITC freq=1299.994MHz+/-650ppm
Console: colour dummy device 80x25
Calibrating delay loop... 1945.60 BogoMIPS (lpj=3891200)
pid_max: default: 32768 minimum: 301
Mount-cache hash table entries: 1024
ACPI: Core revision 20120518
Boot processor id 0x0/0x0
XXX cpu=0 gcwq=e000004040000d80 base=e000004040002000
XXX cpu=0 nr_running=0 @ e000004040002000
XXX cpu=0 nr_running=0 @ e000004040002008
XXX cpu=1 gcwq=e000004040040d80 base=e000004040042000
XXX cpu=1 nr_running=0 @ e000004040042000
XXX cpu=1 nr_running=0 @ e000004040042008
XXX cpu=16 nr_running=0 @ a000000101347668
XXX cpu=16 nr_running=0 @ a000000101347670
------------[ cut here ]------------
WARNING: at kernel/workqueue.c:1220 worker_enter_idle+0x2d0/0x4a0()
Modules linked in:

Call Trace:
 [<a0000001000154e0>] show_stack+0x80/0xa0
                                sp=e0000040600f7c30 bsp=e0000040600f0da8
 [<a000000100d6c4c0>] dump_stack+0x30/0x50
                                sp=e0000040600f7e00 bsp=e0000040600f0d90
 [<a0000001000730a0>] warn_slowpath_common+0xc0/0x100
                                sp=e0000040600f7e00 bsp=e0000040600f0d50
 [<a000000100073120>] warn_slowpath_null+0x40/0x60
                                sp=e0000040600f7e00 bsp=e0000040600f0d28
 [<a0000001000aaab0>] worker_enter_idle+0x2d0/0x4a0
                                sp=e0000040600f7e00 bsp=e0000040600f0cf0
 [<a0000001000ad000>] worker_thread+0x4a0/0xbe0
                                sp=e0000040600f7e00 bsp=e0000040600f0c28
 [<a0000001000bdc10>] kthread+0x110/0x140
                                sp=e0000040600f7e00 bsp=e0000040600f0be8
 [<a000000100013510>] kernel_thread_helper+0x30/0x60
                                sp=e0000040600f7e30 bsp=e0000040600f0bc0
 [<a00000010000a0c0>] start_kernel_thread+0x20/0x40
                                sp=e0000040600f7e30 bsp=e0000040600f0bc0
---[ end trace e9840e0cb994cb82 ]---
Fixed BSP b0 value from CPU 1
CPU 1: synchronized ITC with CPU 0 (last diff -12 cycles, maxerr 585 cycles)
CPU 1: base freq=199.999MHz, ITC ratio=13/2, ITC freq=1299.994MHz+/-650ppm
Brought up 2 CPUs
Total of 2 processors activated (3891.20 BogoMIPS).
DMI 2.3 present.
DMI: hp server rx2620                   , BIOS 03.17                                                            03/31/2005
NET: Registered protocol family 16
ACPI: bus type pci registered
bio: create slab <bio-0> at 0
ACPI: Added _OSI(Module Device)
ACPI: Added _OSI(Processor Device)
ACPI: Added _OSI(3.0 _SCP Extensions)
ACPI: Added _OSI(Processor Aggregator Device)
ACPI: EC: Look up EC in DSDT
ACPI: Interpreter enabled
ACPI: (supports S0 S5)
ACPI: Using IOSAPIC for interrupt routing
ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-1f])
pci_root HWP0002:00: host bridge window [io  0x0000-0x1fff]
pci_root HWP0002:00: host bridge window [mem 0x80000000-0x8fffffff]
pci_root HWP0002:00: host bridge window [mem 0x80004000000-0x80103fffffe]
PCI host bridge to bus 0000:00
pci_bus 0000:00: busn_res: [bus 00-1f] is inserted under domain [bus 00-ff]
pci_bus 0000:00: root bus resource [bus 00-1f]
pci_bus 0000:00: root bus resource [io  0x0000-0x1fff]
pci_bus 0000:00: root bus resource [mem 0x80000000-0x8fffffff]
pci_bus 0000:00: root bus resource [mem 0x80004000000-0x80103fffffe]
pci 0000:00:01.0: [1033:0035] type 00 class 0x0c0310
pci 0000:00:01.0: reg 10: [mem 0x80002000-0x80002fff]
pci 0000:00:01.0: supports D1 D2
pci 0000:00:01.0: PME# supported from D0 D1 D2 D3hot
pci 0000:00:01.1: [1033:0035] type 00 class 0x0c0310
pci 0000:00:01.1: reg 10: [mem 0x80001000-0x80001fff]
pci 0000:00:01.1: supports D1 D2
pci 0000:00:01.1: PME# supported from D0 D1 D2 D3hot
pci 0000:00:01.2: [1033:00e0] type 00 class 0x0c0320
pci 0000:00:01.2: reg 10: [mem 0x80000000-0x800000ff]
pci 0000:00:01.2: supports D1 D2
pci 0000:00:01.2: PME# supported from D0 D1 D2 D3hot
pci 0000:00:02.0: [1095:0649] type 00 class 0x01018f
pci 0000:00:02.0: reg 10: [io  0x0d18-0x0d1f]
pci 0000:00:02.0: reg 14: [io  0x0d24-0x0d27]
pci 0000:00:02.0: reg 18: [io  0x0d10-0x0d17]
pci 0000:00:02.0: reg 1c: [io  0x0d20-0x0d23]
pci 0000:00:02.0: reg 20: [io  0x0d00-0x0d0f]
pci 0000:00:02.0: supports D1 D2
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI0._PRT]
 pci0000:00: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI1] (domain 0000 [bus 20-3f])
pci_root HWP0002:01: host bridge window [mem 0xff5e0000-0xff5e0007]
pci_root HWP0002:01: host bridge window [mem 0xff5e2000-0xff5e2007]
pci_root HWP0002:01: host bridge window [io  0x2000-0x2fff]
pci_root HWP0002:01: host bridge window [mem 0x90000000-0x97ffffff]
pci_root HWP0002:01: host bridge window [mem 0x90004000000-0x90103fffffe]
PCI host bridge to bus 0000:20
pci_bus 0000:20: busn_res: [bus 20-3f] is inserted under domain [bus 00-ff]
pci_bus 0000:20: root bus resource [bus 20-3f]
pci_bus 0000:20: root bus resource [io  0x2000-0x2fff]
pci_bus 0000:20: root bus resource [mem 0x90000000-0x97ffffff]
pci_bus 0000:20: root bus resource [mem 0x90004000000-0x90103fffffe]
pci 0000:20:01.0: [1000:0030] type 00 class 0x010000
pci 0000:20:01.0: reg 10: [io  0x2100-0x21ff]
pci 0000:20:01.0: reg 14: [mem 0x903a0000-0x903bffff 64bit]
pci 0000:20:01.0: reg 1c: [mem 0x90380000-0x9039ffff 64bit]
pci 0000:20:01.0: reg 30: [mem 0x90100000-0x901fffff pref]
pci 0000:20:01.0: supports D1 D2
pci 0000:20:01.1: [1000:0030] type 00 class 0x010000
pci 0000:20:01.1: reg 10: [io  0x2000-0x20ff]
pci 0000:20:01.1: reg 14: [mem 0x90360000-0x9037ffff 64bit]
pci 0000:20:01.1: reg 1c: [mem 0x90340000-0x9035ffff 64bit]
pci 0000:20:01.1: reg 30: [mem 0x90000000-0x900fffff pref]
pci 0000:20:01.1: supports D1 D2
pci 0000:20:02.0: [8086:1079] type 00 class 0x020000
pci 0000:20:02.0: reg 10: [mem 0x90320000-0x9033ffff 64bit]
pci 0000:20:02.0: reg 18: [mem 0x90280000-0x902fffff 64bit]
pci 0000:20:02.0: reg 20: [io  0x2240-0x227f]
pci 0000:20:02.0: reg 30: [mem 0x90200000-0x9027ffff pref]
pci 0000:20:02.0: PME# supported from D0 D3hot D3cold
pci 0000:20:02.1: [8086:1079] type 00 class 0x020000
pci 0000:20:02.1: reg 10: [mem 0x90300000-0x9031ffff 64bit]
pci 0000:20:02.1: reg 20: [io  0x2200-0x223f]
pci 0000:20:02.1: PME# supported from D0 D3hot D3cold
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI1._PRT]
 pci0000:20: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI2] (domain 0000 [bus 40-5f])
pci_root HWP0002:02: host bridge window [io  0x3000-0x5fff]
pci_root HWP0002:02: host bridge window [mem 0x98000000-0xafffffff]
pci_root HWP0002:02: host bridge window [mem 0xa0004000000-0xa0103fffffe]
PCI host bridge to bus 0000:40
pci_bus 0000:40: busn_res: [bus 40-5f] is inserted under domain [bus 00-ff]
pci_bus 0000:40: root bus resource [bus 40-5f]
pci_bus 0000:40: root bus resource [io  0x3000-0x5fff]
pci_bus 0000:40: root bus resource [mem 0x98000000-0xafffffff]
pci_bus 0000:40: root bus resource [mem 0xa0004000000-0xa0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI2._PRT]
 pci0000:40: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 60-7f])
pci_root HWP0002:03: host bridge window [io  0x6000-0x7fff]
pci_root HWP0002:03: host bridge window [mem 0xb0000000-0xc7ffffff]
pci_root HWP0002:03: host bridge window [mem 0xb0004000000-0xb0103fffffe]
PCI host bridge to bus 0000:60
pci_bus 0000:60: busn_res: [bus 60-7f] is inserted under domain [bus 00-ff]
pci_bus 0000:60: root bus resource [bus 60-7f]
pci_bus 0000:60: root bus resource [io  0x6000-0x7fff]
pci_bus 0000:60: root bus resource [mem 0xb0000000-0xc7ffffff]
pci_bus 0000:60: root bus resource [mem 0xb0004000000-0xb0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI3._PRT]
 pci0000:60: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI4] (domain 0000 [bus 80-bf])
pci_root HWP0002:04: host bridge window [io  0x8000-0xbfff]
pci_root HWP0002:04: host bridge window [mem 0xc8000000-0xdfffffff]
pci_root HWP0002:04: host bridge window [mem 0xc0004000000-0xc0103fffffe]
PCI host bridge to bus 0000:80
pci_bus 0000:80: busn_res: [bus 80-bf] is inserted under domain [bus 00-ff]
pci_bus 0000:80: root bus resource [bus 80-bf]
pci_bus 0000:80: root bus resource [io  0x8000-0xbfff]
pci_bus 0000:80: root bus resource [mem 0xc8000000-0xdfffffff]
pci_bus 0000:80: root bus resource [mem 0xc0004000000-0xc0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI4._PRT]
 pci0000:80: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI6] (domain 0000 [bus c0-df])
pci_root HWP0002:05: host bridge window [io  0xc000-0xdfff]
pci_root HWP0002:05: host bridge window [mem 0xe0000000-0xefffffff]
pci_root HWP0002:05: host bridge window [mem 0xe0004000000-0xe0103fffffe]
PCI host bridge to bus 0000:c0
pci_bus 0000:c0: busn_res: [bus c0-df] is inserted under domain [bus 00-ff]
pci_bus 0000:c0: root bus resource [bus c0-df]
pci_bus 0000:c0: root bus resource [io  0xc000-0xdfff]
pci_bus 0000:c0: root bus resource [mem 0xe0000000-0xefffffff]
pci_bus 0000:c0: root bus resource [mem 0xe0004000000-0xe0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI6._PRT]
 pci0000:c0: Unable to request _OSC control (_OSC support mask: 0x09)
vgaarb: loaded
SCSI subsystem initialized
ACPI: bus type usb registered
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
Advanced Linux Sound Architecture Driver Version 1.0.25.
IOC: zx1 2.3 HPA 0xfed01000 IOVA space 1024Mb at 0x40000000
Switching to clocksource itc
pnp: PnP ACPI init
ACPI: bus type pnp registered
pnp 00:00: [mem 0xfed00000-0xfed07fff]
pnp 00:00: Plug and Play ACPI device, IDs HWP0001 PNP0a05 (active)
pnp 00:01: [mem 0xff5b0000-0xff5b0003]
pnp 00:01: Plug and Play ACPI device, IDs IPI0001 (active)
pnp 00:02: [bus 00-1f]
pnp 00:02: [mem 0xfed20000-0xfed21fff]
pnp 00:02: [io  0x0000-0x1fff window]
pnp 00:02: [mem 0x80000000-0x8fffffff window]
pnp 00:02: [mem 0x80004000000-0x80103fffffe window]
pnp 00:02: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:03: [bus 20-3f]
pnp 00:03: [mem 0xff5e0000-0xff5e0007 window]
pnp 00:03: [mem 0xff5e2000-0xff5e2007 window]
pnp 00:03: [mem 0xfed22000-0xfed23fff]
pnp 00:03: [io  0x2000-0x2fff window]
pnp 00:03: [mem 0x90000000-0x97ffffff window]
pnp 00:03: [mem 0x90004000000-0x90103fffffe window]
pnp 00:03: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
GSI 34 (level, low) -> CPU 1 (0x0100) vector 49
pnp 00:04: [irq 49]
pnp 00:04: [mem 0xff5e0000-0xff5e0007]
pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
GSI 35 (level, low) -> CPU 0 (0x0000) vector 50
pnp 00:05: [irq 50]
pnp 00:05: [mem 0xff5e2000-0xff5e2007]
pnp 00:05: Plug and Play ACPI device, IDs PNP0501 (active)
pnp 00:06: [bus 40-5f]
pnp 00:06: [mem 0xfed24000-0xfed25fff]
pnp 00:06: [io  0x3000-0x5fff window]
pnp 00:06: [mem 0x98000000-0xafffffff window]
pnp 00:06: [mem 0xa0004000000-0xa0103fffffe window]
pnp 00:06: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:07: [bus 60-7f]
pnp 00:07: [mem 0xfed26000-0xfed27fff]
pnp 00:07: [io  0x6000-0x7fff window]
pnp 00:07: [mem 0xb0000000-0xc7ffffff window]
pnp 00:07: [mem 0xb0004000000-0xb0103fffffe window]
pnp 00:07: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:08: [bus 80-bf]
pnp 00:08: [mem 0xfed28000-0xfed29fff]
pnp 00:08: [io  0x8000-0xbfff window]
pnp 00:08: [mem 0xc8000000-0xdfffffff window]
pnp 00:08: [mem 0xc0004000000-0xc0103fffffe window]
pnp 00:08: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:09: [bus c0-df]
pnp 00:09: [mem 0xfed2c000-0xfed2dfff]
pnp 00:09: [io  0xc000-0xdfff window]
pnp 00:09: [mem 0xe0000000-0xefffffff window]
pnp 00:09: [mem 0xe0004000000-0xe0103fffffe window]
pnp 00:09: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp: PnP ACPI: found 10 devices
ACPI: ACPI bus type pnp unregistered
NET: Registered protocol family 2
IP route cache hash table entries: 16384 (order: 3, 131072 bytes)
TCP established hash table entries: 65536 (order: 6, 1048576 bytes)
TCP bind hash table entries: 65536 (order: 6, 1048576 bytes)
TCP: Hash tables configured (established 65536 bind 65536)
TCP: reno registered
UDP hash table entries: 1024 (order: 1, 32768 bytes)
UDP-Lite hash table entries: 1024 (order: 1, 32768 bytes)
NET: Registered protocol family 1
RPC: Registered named UNIX socket transport module.
RPC: Registered udp transport module.
RPC: Registered tcp transport module.
RPC: Registered tcp NFSv4.1 backchannel transport module.
GSI 16 (level, low) -> CPU 1 (0x0100) vector 51
GSI 16 (level, low) -> CPU 1 (0x0100) vector 51 unregistered
GSI 17 (level, low) -> CPU 0 (0x0000) vector 51
GSI 17 (level, low) -> CPU 0 (0x0000) vector 51 unregistered
GSI 18 (level, low) -> CPU 1 (0x0100) vector 51
GSI 18 (level, low) -> CPU 1 (0x0100) vector 51 unregistered
PCI: CLS 128 bytes, default 128
Trying to unpack rootfs image as initramfs...
Freeing initrd memory: 5920kB freed
perfmon: version 2.0 IRQ 238
perfmon: Itanium 2 PMU detected, 16 PMCs, 18 PMDs, 4 counters (47 bits)
PAL Information Facility v0.5
perfmon: added sampling format default_format
perfmon_default_smpl: default_format v2.0 registered
HugeTLB registered 256 MB page size, pre-allocated 0 pages
NFS: Registering the id_resolver key type
Key type id_resolver registered
Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
msgmni has been set to 4011
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 254)
io scheduler noop registered
io scheduler deadline registered
io scheduler cfq registered (default)
pci_hotplug: PCI Hot Plug PCI Core version: 0.5
acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
ACPI: Power Button [PWRF]
input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input1
ACPI: Sleep Button [SLPF]
thermal LNXTHERM:00: registered as thermal_zone0
ACPI: Thermal Zone [THM0] (27 C)
Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
00:04: ttyS0 at MMIO 0xff5e0000 (irq = 49) is a 16550A
console [ttyS0] enabled, bootconsole disabled
00:05: ttyS1 at MMIO 0xff5e2000 (irq = 50) is a 16550A
EFI Time Services Driver v0.4
Linux agpgart interface v0.103
[drm] Initialized drm 1.1.0 20060810
[drm] radeon defaulting to userspace modesetting.
brd: module loaded
loop: module loaded
Uniform Multi-Platform E-IDE driver
cmd64x 0000:00:02.0: IDE controller (0x1095:0x0649 rev 0x02)
GSI 21 (level, low) -> CPU 0 (0x0000) vector 51
cmd64x 0000:00:02.0: IDE port disabled
cmd64x 0000:00:02.0: 100% native mode on irq 54
    ide0: BM-DMA at 0x0d00-0x0d07
Probing IDE interface ide0...
hda: _NEC DVD+/-RW ND-6650A, ATAPI CD/DVD-ROM drive
hda: host max PIO5 wanted PIO255(auto-tune) selected PIO4
hda: MWDMA2 mode selected
ide0 at 0xd18-0xd1f,0xd26 on irq 54
ide-gd driver 1.18
ide-cd driver 5.00
ide-cd: hda: ATAPI 24X DVD-ROM DVD-R CD-R/RW drive, 2048kB Cache
cdrom: Uniform CD-ROM driver Revision: 3.20
st: Version 20101219, fixed bufsize 32768, s/g segs 256
osst :I: Tape driver with OnStream support version 0.99.4
osst :I: $Id: osst.c,v 1.73 2005/01/01 21:13:34 wriede Exp $
e100: Intel(R) PRO/100 Network Driver, 3.5.24-k2-NAPI
e100: Copyright(c) 1999-2006 Intel Corporation
e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
e1000: Copyright (c) 1999-2006 Intel Corporation.
GSI 29 (level, low) -> CPU 1 (0x0100) vector 52
e1000 0000:20:02.0: eth0: (PCI-X:66MHz:64-bit) 00:13:21:5b:f6:9a
e1000 0000:20:02.0: eth0: Intel(R) PRO/1000 Network Connection
GSI 30 (level, low) -> CPU 0 (0x0000) vector 53
e1000 0000:20:02.1: eth1: (PCI-X:66MHz:64-bit) 00:13:21:5b:f6:9b
e1000 0000:20:02.1: eth1: Intel(R) PRO/1000 Network Connection
Fusion MPT base driver 3.04.20
Copyright (c) 1999-2008 LSI Corporation
Fusion MPT SPI Host driver 3.04.20
GSI 27 (level, low) -> CPU 1 (0x0100) vector 54
mptbase: ioc0: Initiating bringup
ioc0: LSI53C1030 C0: Capabilities={Initiator,Target}
scsi0 : ioc0: LSI53C1030 C0, FwRev=01032341h, Ports=1, MaxQ=255, IRQ=57
scsi 0:0:0:0: Direct-Access     HP 36.4G MAU3036NC        HPC2 PQ: 0 ANSI: 3
scsi target0:0:0: Beginning Domain Validation
scsi target0:0:0: Ending Domain Validation
scsi target0:0:0: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
sd 0:0:0:0: Attached scsi generic sg0 type 0
sd 0:0:0:0: [sda] 71132960 512-byte logical blocks: (36.4 GB/33.9 GiB)
scsi 0:0:1:0: Direct-Access     HP 36.4G MAU3036NC        HPC2 PQ: 0 ANSI: 3
scsi target0:0:1: Beginning Domain Validation
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: cf 00 10 08
sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
scsi target0:0:1: Ending Domain Validation
scsi target0:0:1: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
sd 0:0:1:0: Attached scsi generic sg1 type 0
sd 0:0:1:0: [sdb] 71132960 512-byte logical blocks: (36.4 GB/33.9 GiB)
 sda: sda1
sd 0:0:1:0: [sdb] Write Protect is off
sd 0:0:1:0: [sdb] Mode Sense: cf 00 10 08
sd 0:0:1:0: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA
sd 0:0:0:0: [sda] Attached SCSI disk
GSI 28 (level, low) -> CPU 0 (0x0000) vector 55
mptbase: ioc1: Initiating bringup
ioc1: LSI53C1030 C0: Capabilities={Initiator,Target}
scsi1 : ioc1: LSI53C1030 C0, FwRev=01032341h, Ports=1, MaxQ=255, IRQ=58
 sdb: sdb1 sdb2
sd 0:0:1:0: [sdb] Attached SCSI disk
Fusion MPT FC Host driver 3.04.20
ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
GSI 18 (level, low) -> CPU 1 (0x0100) vector 56
ehci_hcd 0000:00:01.2: EHCI Host Controller
ehci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
ehci_hcd 0000:00:01.2: irq 53, io mem 0x80000000
ehci_hcd 0000:00:01.2: USB 2.0 started, EHCI 0.95
hub 1-0:1.0: USB hub found
hub 1-0:1.0: 5 ports detected
ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
GSI 16 (level, low) -> CPU 0 (0x0000) vector 57
ohci_hcd 0000:00:01.0: OHCI Host Controller
ohci_hcd 0000:00:01.0: new USB bus registered, assigned bus number 2
ohci_hcd 0000:00:01.0: irq 51, io mem 0x80002000
hub 2-0:1.0: USB hub found
hub 2-0:1.0: 3 ports detected
GSI 17 (level, low) -> CPU 1 (0x0100) vector 58
ohci_hcd 0000:00:01.1: OHCI Host Controller
ohci_hcd 0000:00:01.1: new USB bus registered, assigned bus number 3
ohci_hcd 0000:00:01.1: irq 52, io mem 0x80001000
hub 3-0:1.0: USB hub found
hub 3-0:1.0: 2 ports detected
uhci_hcd: USB Universal Host Controller Interface driver
Initializing USB Mass Storage driver...
usbcore: registered new interface driver usb-storage
USB Mass Storage support registered.
mousedev: PS/2 mouse device common for all mice
i2c /dev entries driver
EFI Variables Facility v0.08 2004-May-17
usbcore: registered new interface driver usbhid
usbhid: USB HID core driver
TCP: cubic registered
NET: Registered protocol family 17
Key type dns_resolver registered
ALSA device list:
  No soundcards found.
Freeing unused kernel memory: 848kB freed
udevd (136): /proc/136/oom_adj is deprecated, please use /proc/136/oom_score_adj instead.
udevd version 128 started
EXT3-fs (sdb2): (no)acl options not supported
kjournald starting.  Commit interval 5 seconds
EXT3-fs (sdb2): using internal journal
EXT3-fs (sdb2): mounted filesystem with ordered data mode
EXT3-fs (sdb2): (no)acl options not supported
udevd version 128 started
Fusion MPT misc device (ioctl) driver 3.04.20
mptctl: Registered with Fusion MPT base driver
mptctl: /dev/mptctl @ (major,minor=10,220)
EXT3-fs (sda1): (no)acl options not supported
kjournald starting.  Commit interval 5 seconds
EXT3-fs (sda1): using internal journal
EXT3-fs (sda1): mounted filesystem with ordered data mode
e1000: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
  2012-07-12 22:16             ` Tony Luck
  (?)
@ 2012-07-12 22:32               ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-12 22:32 UTC (permalink / raw)
  To: Tony Luck
  Cc: Fengguang Wu, linux-kernel, torvalds, joshhunt00, axboe, rni,
	vgoyal, vwadekar, herbert, davem, linux-crypto, swhiteho, bpm,
	elder, xfs, marcel, gustavo, johan.hedberg, linux-bluetooth,
	martin.petersen

Hello, Tony.

On Thu, Jul 12, 2012 at 03:16:30PM -0700, Tony Luck wrote:
> On Thu, Jul 12, 2012 at 2:45 PM, Tejun Heo <tj@kernel.org> wrote:
> > I was wrong and am now dazed and confused.  That's from
> > init_workqueues() where only cpu0 is running.  How the hell did
> > nr_running manage to become non-zero at that point?  Can you please
> > apply the following patch and report the boot log?  Thank you.
> 
> Patch applied on top of next-20120712 (which still has the same problem).

Can you please try the following debug patch instead?  Yours is
different from Fengguang's.

Thanks a lot!
---
 kernel/workqueue.c |   40 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 36 insertions(+), 4 deletions(-)

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -699,8 +699,10 @@ void wq_worker_waking_up(struct task_str
 {
 	struct worker *worker = kthread_data(task);
 
-	if (!(worker->flags & WORKER_NOT_RUNNING))
+	if (!(worker->flags & WORKER_NOT_RUNNING)) {
+		WARN_ON_ONCE(cpu != worker->pool->gcwq->cpu);
 		atomic_inc(get_pool_nr_running(worker->pool));
+	}
 }
 
 /**
@@ -730,6 +732,7 @@ struct task_struct *wq_worker_sleeping(s
 
 	/* this can only happen on the local cpu */
 	BUG_ON(cpu != raw_smp_processor_id());
+	WARN_ON_ONCE(cpu != worker->pool->gcwq->cpu);
 
 	/*
 	 * The counterpart of the following dec_and_test, implied mb,
@@ -1212,9 +1215,30 @@ static void worker_enter_idle(struct wor
 	 * between setting %WORKER_ROGUE and zapping nr_running, the
 	 * warning may trigger spuriously.  Check iff trustee is idle.
 	 */
-	WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
-		     pool->nr_workers == pool->nr_idle &&
-		     atomic_read(get_pool_nr_running(pool)));
+	if (WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
+			 pool->nr_workers == pool->nr_idle &&
+			 atomic_read(get_pool_nr_running(pool)))) {
+		static bool once = false;
+		int cpu;
+
+		if (once)
+			return;
+		once = true;
+
+		printk("XXX nr_running mismatch on gcwq[%d] pool[%ld]\n",
+		       gcwq->cpu, pool - gcwq->pools);
+
+		for_each_gcwq_cpu(cpu) {
+			gcwq = get_gcwq(cpu);
+
+			printk("XXX gcwq[%d] flags=0x%x\n", gcwq->cpu, gcwq->flags);
+			for_each_worker_pool(pool, gcwq)
+				printk("XXX gcwq[%d] pool[%ld] nr_workers=%d nr_idle=%d nr_running=%d\n",
+				       gcwq->cpu, pool - gcwq->pools,
+				       pool->nr_workers, pool->nr_idle,
+				       atomic_read(get_pool_nr_running(pool)));
+		}
+	}
 }
 
 /**
@@ -3855,6 +3879,10 @@ static int __init init_workqueues(void)
 		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
 			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
 
+		if (cpu != WORK_CPU_UNBOUND)
+			printk("XXX cpu=%d gcwq=%p base=%p\n", cpu, gcwq,
+			       per_cpu_ptr(&pool_nr_running, cpu));
+
 		for_each_worker_pool(pool, gcwq) {
 			pool->gcwq = gcwq;
 			INIT_LIST_HEAD(&pool->worklist);
@@ -3868,6 +3896,10 @@ static int __init init_workqueues(void)
 				    (unsigned long)pool);
 
 			ida_init(&pool->worker_ida);
+
+			printk("XXX cpu=%d nr_running=%d @ %p\n", gcwq->cpu,
+			       atomic_read(get_pool_nr_running(pool)),
+			       get_pool_nr_running(pool));
 		}
 
 		gcwq->trustee_state = TRUSTEE_DONE;

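As an aside on the pattern used in the debug patch above: the `static bool once` guard makes the expensive state dump fire only on the first violation, so later hits of the same WARN don't flood the console. A minimal userspace sketch of that one-shot dump, with illustrative names (`pool`, `nr_running`, `NPOOLS` are stand-ins, not the kernel's structures):

```c
/* One-shot debug-dump sketch: the first caller that trips the invariant
 * dumps the state of every pool; later callers stay silent.
 * All names here are illustrative, not the kernel's.
 */
#include <stdbool.h>
#include <stdio.h>

#define NPOOLS 2

struct pool {
	int nr_workers;
	int nr_idle;
	int nr_running;   /* counter the invariant check inspects */
};

static struct pool pools[NPOOLS];

/* Returns true iff the invariant is violated; dumps all pools once. */
static bool check_pool_idle(int idx)
{
	static bool once;
	struct pool *p = &pools[idx];

	/* Invariant: if every worker is idle, nr_running must be zero. */
	if (!(p->nr_workers == p->nr_idle && p->nr_running))
		return false;

	if (once)
		return true;	/* already dumped; stay quiet */
	once = true;

	printf("XXX nr_running mismatch on pool[%d]\n", idx);
	for (int i = 0; i < NPOOLS; i++)
		printf("XXX pool[%d] nr_workers=%d nr_idle=%d nr_running=%d\n",
		       i, pools[i].nr_workers, pools[i].nr_idle,
		       pools[i].nr_running);
	return true;
}
```

Note the kernel version guards the flag only by virtue of running in contexts where racing is tolerable; a userspace variant under real concurrency would want an atomic flag instead.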
* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
  2012-07-12 22:32               ` Tejun Heo
  (?)
@ 2012-07-12 23:24                 ` Tony Luck
  -1 siblings, 0 replies; 96+ messages in thread
From: Tony Luck @ 2012-07-12 23:24 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Fengguang Wu, linux-kernel, torvalds, joshhunt00, axboe, rni,
	vgoyal, vwadekar, herbert, davem, linux-crypto, swhiteho, bpm,
	elder, xfs, marcel, gustavo, johan.hedberg, linux-bluetooth,
	martin.petersen

On Thu, Jul 12, 2012 at 3:32 PM, Tejun Heo <tj@kernel.org> wrote:
> Can you please try the following debug patch instead?  Yours is
> different from Fengguang's.

New dmesg from next-20120712 + this new patch (instead of the previous one)

[Note - I see some XXX traces, but no WARN_ON stack dump this time]

-Tony

[-- Attachment #2: dmesg.txt --]
[-- Type: text/plain, Size: 22303 bytes --]

Linux version 3.5.0-rc6-zx1-smp-next-20120712-1-gaf0be05 (aegl@linux-bxb1) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP Thu Jul 12 16:13:45 PDT 2012
EFI v1.10 by HP: SALsystab=0x3fefa000 ACPI 2.0=0x3fd5e000 SMBIOS=0x3fefc000 HCDP=0x3fd5c000
Early serial console at MMIO 0xff5e0000 (options '9600')
bootconsole [uart0] enabled
PCDP: v0 at 0x3fd5c000
Explicit "console="; ignoring PCDP
ACPI: RSDP 000000003fd5e000 00028 (v02     HP)
ACPI: XSDT 000000003fd5e02c 00094 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: FACP 000000003fd67390 000F4 (v03     HP   rx2620 00000000   HP 00000000)
ACPI Warning: 32/64X length mismatch in Gpe0Block: 32/16 (20120518/tbfadt-565)
ACPI Warning: 32/64X length mismatch in Gpe1Block: 32/16 (20120518/tbfadt-565)
ACPI: DSDT 000000003fd5e100 05F3C (v01     HP   rx2620 00000007 INTL 02012044)
ACPI: FACS 000000003fd67488 00040
ACPI: SPCR 000000003fd674c8 00050 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: DBGP 000000003fd67518 00034 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: APIC 000000003fd67610 000B0 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: SPMI 000000003fd67550 00050 (v04     HP   rx2620 00000000   HP 00000000)
ACPI: CPEP 000000003fd675a0 00034 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: SSDT 000000003fd64040 001D6 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd64220 00702 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd64930 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd65350 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd65d70 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd66790 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd671b0 000EB (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd672a0 000EF (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: Local APIC address c0000000fee00000
2 CPUs available, 2 CPUs total
warning: skipping physical page 0
Initial ramdisk at: 0xe00000407e9bb000 (6071698 bytes)
SAL 3.1: HP version 3.15
SAL Platform features: None
SAL: AP wakeup using external interrupt vector 0xff
MCA related initialization done
warning: skipping physical page 0
Zone ranges:
  DMA      [mem 0x00004000-0xffffffff]
  Normal   [mem 0x100000000-0x407ffc7fff]
Movable zone start for each node
Early memory node ranges
  node   0: [mem 0x00004000-0x3f4ebfff]
  node   0: [mem 0x3fc00000-0x3fd5bfff]
  node   0: [mem 0x4040000000-0x407fd2bfff]
  node   0: [mem 0x407fd98000-0x407fe07fff]
  node   0: [mem 0x407fe80000-0x407ffc7fff]
On node 0 totalpages: 130378
free_area_init_node: node 0, pgdat a0000001012ee380, node_mem_map a0007fffc7900038
  DMA zone: 896 pages used for memmap
  DMA zone: 0 pages reserved
  DMA zone: 64017 pages, LIFO batch:7
  Normal zone: 56896 pages used for memmap
  Normal zone: 8569 pages, LIFO batch:1
Virtual mem_map starts at 0xa0007fffc7900000
pcpu-alloc: s11392 r8192 d242560 u262144 alloc=16*16384
pcpu-alloc: [0] 0 [0] 1 
Built 1 zonelists in Zone order, mobility grouping off.  Total pages: 72586
Kernel command line: BOOT_IMAGE=scsi0:\efi\SuSE\l-zx1-smp.gz root=/dev/disk/by-id/scsi-200000e1100a5d5f2-part2  console=uart,mmio,0xff5e0000 
PID hash table entries: 4096 (order: 1, 32768 bytes)
Dentry cache hash table entries: 262144 (order: 7, 2097152 bytes)
Inode-cache hash table entries: 131072 (order: 6, 1048576 bytes)
Memory: 2048176k/2086064k available (13821k code, 37888k reserved, 5919k data, 848k init)
SLUB: Genslabs=17, HWalign=128, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Hierarchical RCU implementation.
	RCU restricting CPUs from NR_CPUS=16 to nr_cpu_ids=2.
NR_IRQS:768
ACPI: Local APIC address c0000000fee00000
GSI 36 (level, low) -> CPU 0 (0x0000) vector 48
CPU 0: base freq=199.999MHz, ITC ratio=13/2, ITC freq=1299.994MHz+/-650ppm
Console: colour dummy device 80x25
Calibrating delay loop... 1945.60 BogoMIPS (lpj=3891200)
pid_max: default: 32768 minimum: 301
Mount-cache hash table entries: 1024
ACPI: Core revision 20120518
Boot processor id 0x0/0x0
XXX cpu=0 gcwq=e000004040000d80 base=e000004040002000
XXX cpu=0 nr_running=0 @ e000004040002000
XXX cpu=0 nr_running=0 @ e000004040002008
XXX cpu=1 gcwq=e000004040040d80 base=e000004040042000
XXX cpu=1 nr_running=0 @ e000004040042000
XXX cpu=1 nr_running=0 @ e000004040042008
XXX cpu=16 nr_running=0 @ a000000101347680
XXX cpu=16 nr_running=0 @ a000000101347688
Fixed BSP b0 value from CPU 1
CPU 1: synchronized ITC with CPU 0 (last diff -3 cycles, maxerr 579 cycles)
CPU 1: base freq=199.999MHz, ITC ratio=13/2, ITC freq=1299.994MHz+/-650ppm
Brought up 2 CPUs
Total of 2 processors activated (3891.20 BogoMIPS).
DMI 2.3 present.
DMI: hp server rx2620                   , BIOS 03.17                                                            03/31/2005
NET: Registered protocol family 16
ACPI: bus type pci registered
bio: create slab <bio-0> at 0
ACPI: Added _OSI(Module Device)
ACPI: Added _OSI(Processor Device)
ACPI: Added _OSI(3.0 _SCP Extensions)
ACPI: Added _OSI(Processor Aggregator Device)
ACPI: EC: Look up EC in DSDT
ACPI: Interpreter enabled
ACPI: (supports S0 S5)
ACPI: Using IOSAPIC for interrupt routing
ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-1f])
pci_root HWP0002:00: host bridge window [io  0x0000-0x1fff]
pci_root HWP0002:00: host bridge window [mem 0x80000000-0x8fffffff]
pci_root HWP0002:00: host bridge window [mem 0x80004000000-0x80103fffffe]
PCI host bridge to bus 0000:00
pci_bus 0000:00: busn_res: [bus 00-1f] is inserted under domain [bus 00-ff]
pci_bus 0000:00: root bus resource [bus 00-1f]
pci_bus 0000:00: root bus resource [io  0x0000-0x1fff]
pci_bus 0000:00: root bus resource [mem 0x80000000-0x8fffffff]
pci_bus 0000:00: root bus resource [mem 0x80004000000-0x80103fffffe]
pci 0000:00:01.0: [1033:0035] type 00 class 0x0c0310
pci 0000:00:01.0: reg 10: [mem 0x80002000-0x80002fff]
pci 0000:00:01.0: supports D1 D2
pci 0000:00:01.0: PME# supported from D0 D1 D2 D3hot
pci 0000:00:01.1: [1033:0035] type 00 class 0x0c0310
pci 0000:00:01.1: reg 10: [mem 0x80001000-0x80001fff]
pci 0000:00:01.1: supports D1 D2
pci 0000:00:01.1: PME# supported from D0 D1 D2 D3hot
pci 0000:00:01.2: [1033:00e0] type 00 class 0x0c0320
pci 0000:00:01.2: reg 10: [mem 0x80000000-0x800000ff]
pci 0000:00:01.2: supports D1 D2
pci 0000:00:01.2: PME# supported from D0 D1 D2 D3hot
pci 0000:00:02.0: [1095:0649] type 00 class 0x01018f
pci 0000:00:02.0: reg 10: [io  0x0d18-0x0d1f]
pci 0000:00:02.0: reg 14: [io  0x0d24-0x0d27]
pci 0000:00:02.0: reg 18: [io  0x0d10-0x0d17]
pci 0000:00:02.0: reg 1c: [io  0x0d20-0x0d23]
pci 0000:00:02.0: reg 20: [io  0x0d00-0x0d0f]
pci 0000:00:02.0: supports D1 D2
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI0._PRT]
 pci0000:00: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI1] (domain 0000 [bus 20-3f])
pci_root HWP0002:01: host bridge window [mem 0xff5e0000-0xff5e0007]
pci_root HWP0002:01: host bridge window [mem 0xff5e2000-0xff5e2007]
pci_root HWP0002:01: host bridge window [io  0x2000-0x2fff]
pci_root HWP0002:01: host bridge window [mem 0x90000000-0x97ffffff]
pci_root HWP0002:01: host bridge window [mem 0x90004000000-0x90103fffffe]
PCI host bridge to bus 0000:20
pci_bus 0000:20: busn_res: [bus 20-3f] is inserted under domain [bus 00-ff]
pci_bus 0000:20: root bus resource [bus 20-3f]
pci_bus 0000:20: root bus resource [io  0x2000-0x2fff]
pci_bus 0000:20: root bus resource [mem 0x90000000-0x97ffffff]
pci_bus 0000:20: root bus resource [mem 0x90004000000-0x90103fffffe]
pci 0000:20:01.0: [1000:0030] type 00 class 0x010000
pci 0000:20:01.0: reg 10: [io  0x2100-0x21ff]
pci 0000:20:01.0: reg 14: [mem 0x903a0000-0x903bffff 64bit]
pci 0000:20:01.0: reg 1c: [mem 0x90380000-0x9039ffff 64bit]
pci 0000:20:01.0: reg 30: [mem 0x90100000-0x901fffff pref]
pci 0000:20:01.0: supports D1 D2
pci 0000:20:01.1: [1000:0030] type 00 class 0x010000
pci 0000:20:01.1: reg 10: [io  0x2000-0x20ff]
pci 0000:20:01.1: reg 14: [mem 0x90360000-0x9037ffff 64bit]
pci 0000:20:01.1: reg 1c: [mem 0x90340000-0x9035ffff 64bit]
pci 0000:20:01.1: reg 30: [mem 0x90000000-0x900fffff pref]
pci 0000:20:01.1: supports D1 D2
pci 0000:20:02.0: [8086:1079] type 00 class 0x020000
pci 0000:20:02.0: reg 10: [mem 0x90320000-0x9033ffff 64bit]
pci 0000:20:02.0: reg 18: [mem 0x90280000-0x902fffff 64bit]
pci 0000:20:02.0: reg 20: [io  0x2240-0x227f]
pci 0000:20:02.0: reg 30: [mem 0x90200000-0x9027ffff pref]
pci 0000:20:02.0: PME# supported from D0 D3hot D3cold
pci 0000:20:02.1: [8086:1079] type 00 class 0x020000
pci 0000:20:02.1: reg 10: [mem 0x90300000-0x9031ffff 64bit]
pci 0000:20:02.1: reg 20: [io  0x2200-0x223f]
pci 0000:20:02.1: PME# supported from D0 D3hot D3cold
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI1._PRT]
 pci0000:20: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI2] (domain 0000 [bus 40-5f])
pci_root HWP0002:02: host bridge window [io  0x3000-0x5fff]
pci_root HWP0002:02: host bridge window [mem 0x98000000-0xafffffff]
pci_root HWP0002:02: host bridge window [mem 0xa0004000000-0xa0103fffffe]
PCI host bridge to bus 0000:40
pci_bus 0000:40: busn_res: [bus 40-5f] is inserted under domain [bus 00-ff]
pci_bus 0000:40: root bus resource [bus 40-5f]
pci_bus 0000:40: root bus resource [io  0x3000-0x5fff]
pci_bus 0000:40: root bus resource [mem 0x98000000-0xafffffff]
pci_bus 0000:40: root bus resource [mem 0xa0004000000-0xa0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI2._PRT]
 pci0000:40: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 60-7f])
pci_root HWP0002:03: host bridge window [io  0x6000-0x7fff]
pci_root HWP0002:03: host bridge window [mem 0xb0000000-0xc7ffffff]
pci_root HWP0002:03: host bridge window [mem 0xb0004000000-0xb0103fffffe]
PCI host bridge to bus 0000:60
pci_bus 0000:60: busn_res: [bus 60-7f] is inserted under domain [bus 00-ff]
pci_bus 0000:60: root bus resource [bus 60-7f]
pci_bus 0000:60: root bus resource [io  0x6000-0x7fff]
pci_bus 0000:60: root bus resource [mem 0xb0000000-0xc7ffffff]
pci_bus 0000:60: root bus resource [mem 0xb0004000000-0xb0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI3._PRT]
 pci0000:60: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI4] (domain 0000 [bus 80-bf])
pci_root HWP0002:04: host bridge window [io  0x8000-0xbfff]
pci_root HWP0002:04: host bridge window [mem 0xc8000000-0xdfffffff]
pci_root HWP0002:04: host bridge window [mem 0xc0004000000-0xc0103fffffe]
PCI host bridge to bus 0000:80
pci_bus 0000:80: busn_res: [bus 80-bf] is inserted under domain [bus 00-ff]
pci_bus 0000:80: root bus resource [bus 80-bf]
pci_bus 0000:80: root bus resource [io  0x8000-0xbfff]
pci_bus 0000:80: root bus resource [mem 0xc8000000-0xdfffffff]
pci_bus 0000:80: root bus resource [mem 0xc0004000000-0xc0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI4._PRT]
 pci0000:80: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI6] (domain 0000 [bus c0-df])
pci_root HWP0002:05: host bridge window [io  0xc000-0xdfff]
pci_root HWP0002:05: host bridge window [mem 0xe0000000-0xefffffff]
pci_root HWP0002:05: host bridge window [mem 0xe0004000000-0xe0103fffffe]
PCI host bridge to bus 0000:c0
pci_bus 0000:c0: busn_res: [bus c0-df] is inserted under domain [bus 00-ff]
pci_bus 0000:c0: root bus resource [bus c0-df]
pci_bus 0000:c0: root bus resource [io  0xc000-0xdfff]
pci_bus 0000:c0: root bus resource [mem 0xe0000000-0xefffffff]
pci_bus 0000:c0: root bus resource [mem 0xe0004000000-0xe0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI6._PRT]
 pci0000:c0: Unable to request _OSC control (_OSC support mask: 0x09)
vgaarb: loaded
SCSI subsystem initialized
ACPI: bus type usb registered
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
Advanced Linux Sound Architecture Driver Version 1.0.25.
IOC: zx1 2.3 HPA 0xfed01000 IOVA space 1024Mb at 0x40000000
Switching to clocksource itc
pnp: PnP ACPI init
ACPI: bus type pnp registered
pnp 00:00: [mem 0xfed00000-0xfed07fff]
pnp 00:00: Plug and Play ACPI device, IDs HWP0001 PNP0a05 (active)
pnp 00:01: [mem 0xff5b0000-0xff5b0003]
pnp 00:01: Plug and Play ACPI device, IDs IPI0001 (active)
pnp 00:02: [bus 00-1f]
pnp 00:02: [mem 0xfed20000-0xfed21fff]
pnp 00:02: [io  0x0000-0x1fff window]
pnp 00:02: [mem 0x80000000-0x8fffffff window]
pnp 00:02: [mem 0x80004000000-0x80103fffffe window]
pnp 00:02: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:03: [bus 20-3f]
pnp 00:03: [mem 0xff5e0000-0xff5e0007 window]
pnp 00:03: [mem 0xff5e2000-0xff5e2007 window]
pnp 00:03: [mem 0xfed22000-0xfed23fff]
pnp 00:03: [io  0x2000-0x2fff window]
pnp 00:03: [mem 0x90000000-0x97ffffff window]
pnp 00:03: [mem 0x90004000000-0x90103fffffe window]
pnp 00:03: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
GSI 34 (level, low) -> CPU 1 (0x0100) vector 49
pnp 00:04: [irq 49]
pnp 00:04: [mem 0xff5e0000-0xff5e0007]
pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
GSI 35 (level, low) -> CPU 0 (0x0000) vector 50
pnp 00:05: [irq 50]
pnp 00:05: [mem 0xff5e2000-0xff5e2007]
pnp 00:05: Plug and Play ACPI device, IDs PNP0501 (active)
pnp 00:06: [bus 40-5f]
pnp 00:06: [mem 0xfed24000-0xfed25fff]
pnp 00:06: [io  0x3000-0x5fff window]
pnp 00:06: [mem 0x98000000-0xafffffff window]
pnp 00:06: [mem 0xa0004000000-0xa0103fffffe window]
pnp 00:06: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:07: [bus 60-7f]
pnp 00:07: [mem 0xfed26000-0xfed27fff]
pnp 00:07: [io  0x6000-0x7fff window]
pnp 00:07: [mem 0xb0000000-0xc7ffffff window]
pnp 00:07: [mem 0xb0004000000-0xb0103fffffe window]
pnp 00:07: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:08: [bus 80-bf]
pnp 00:08: [mem 0xfed28000-0xfed29fff]
pnp 00:08: [io  0x8000-0xbfff window]
pnp 00:08: [mem 0xc8000000-0xdfffffff window]
pnp 00:08: [mem 0xc0004000000-0xc0103fffffe window]
pnp 00:08: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:09: [bus c0-df]
pnp 00:09: [mem 0xfed2c000-0xfed2dfff]
pnp 00:09: [io  0xc000-0xdfff window]
pnp 00:09: [mem 0xe0000000-0xefffffff window]
pnp 00:09: [mem 0xe0004000000-0xe0103fffffe window]
pnp 00:09: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp: PnP ACPI: found 10 devices
ACPI: ACPI bus type pnp unregistered
NET: Registered protocol family 2
IP route cache hash table entries: 16384 (order: 3, 131072 bytes)
TCP established hash table entries: 65536 (order: 6, 1048576 bytes)
TCP bind hash table entries: 65536 (order: 6, 1048576 bytes)
TCP: Hash tables configured (established 65536 bind 65536)
TCP: reno registered
UDP hash table entries: 1024 (order: 1, 32768 bytes)
UDP-Lite hash table entries: 1024 (order: 1, 32768 bytes)
NET: Registered protocol family 1
RPC: Registered named UNIX socket transport module.
RPC: Registered udp transport module.
RPC: Registered tcp transport module.
RPC: Registered tcp NFSv4.1 backchannel transport module.
GSI 16 (level, low) -> CPU 1 (0x0100) vector 51
GSI 16 (level, low) -> CPU 1 (0x0100) vector 51 unregistered
GSI 17 (level, low) -> CPU 0 (0x0000) vector 51
GSI 17 (level, low) -> CPU 0 (0x0000) vector 51 unregistered
GSI 18 (level, low) -> CPU 1 (0x0100) vector 51
GSI 18 (level, low) -> CPU 1 (0x0100) vector 51 unregistered
PCI: CLS 128 bytes, default 128
Trying to unpack rootfs image as initramfs...
Freeing initrd memory: 5920kB freed
perfmon: version 2.0 IRQ 238
perfmon: Itanium 2 PMU detected, 16 PMCs, 18 PMDs, 4 counters (47 bits)
PAL Information Facility v0.5
perfmon: added sampling format default_format
perfmon_default_smpl: default_format v2.0 registered
HugeTLB registered 256 MB page size, pre-allocated 0 pages
NFS: Registering the id_resolver key type
Key type id_resolver registered
Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
msgmni has been set to 4011
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 254)
io scheduler noop registered
io scheduler deadline registered
io scheduler cfq registered (default)
pci_hotplug: PCI Hot Plug PCI Core version: 0.5
acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
ACPI: Power Button [PWRF]
input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input1
ACPI: Sleep Button [SLPF]
thermal LNXTHERM:00: registered as thermal_zone0
ACPI: Thermal Zone [THM0] (27 C)
Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
00:04: ttyS0 at MMIO 0xff5e0000 (irq = 49) is a 16550A
console [ttyS0] enabled, bootconsole disabled
00:05: ttyS1 at MMIO 0xff5e2000 (irq = 50) is a 16550A
EFI Time Services Driver v0.4
Linux agpgart interface v0.103
[drm] Initialized drm 1.1.0 20060810
[drm] radeon defaulting to userspace modesetting.
brd: module loaded
loop: module loaded
Uniform Multi-Platform E-IDE driver
cmd64x 0000:00:02.0: IDE controller (0x1095:0x0649 rev 0x02)
GSI 21 (level, low) -> CPU 0 (0x0000) vector 51
cmd64x 0000:00:02.0: IDE port disabled
cmd64x 0000:00:02.0: 100% native mode on irq 54
    ide0: BM-DMA at 0x0d00-0x0d07
Probing IDE interface ide0...
hda: _NEC DVD+/-RW ND-6650A, ATAPI CD/DVD-ROM drive
hda: host max PIO5 wanted PIO255(auto-tune) selected PIO4
hda: MWDMA2 mode selected
ide0 at 0xd18-0xd1f,0xd26 on irq 54
ide-gd driver 1.18
ide-cd driver 5.00
ide-cd: hda: ATAPI 24X DVD-ROM DVD-R CD-R/RW drive, 2048kB Cache
cdrom: Uniform CD-ROM driver Revision: 3.20
st: Version 20101219, fixed bufsize 32768, s/g segs 256
osst :I: Tape driver with OnStream support version 0.99.4
osst :I: $Id: osst.c,v 1.73 2005/01/01 21:13:34 wriede Exp $
e100: Intel(R) PRO/100 Network Driver, 3.5.24-k2-NAPI
e100: Copyright(c) 1999-2006 Intel Corporation
e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
e1000: Copyright (c) 1999-2006 Intel Corporation.
GSI 29 (level, low) -> CPU 1 (0x0100) vector 52
e1000 0000:20:02.0: eth0: (PCI-X:66MHz:64-bit) 00:13:21:5b:f6:9a
e1000 0000:20:02.0: eth0: Intel(R) PRO/1000 Network Connection
GSI 30 (level, low) -> CPU 0 (0x0000) vector 53
e1000 0000:20:02.1: eth1: (PCI-X:66MHz:64-bit) 00:13:21:5b:f6:9b
e1000 0000:20:02.1: eth1: Intel(R) PRO/1000 Network Connection
Fusion MPT base driver 3.04.20
Copyright (c) 1999-2008 LSI Corporation
Fusion MPT SPI Host driver 3.04.20
GSI 27 (level, low) -> CPU 1 (0x0100) vector 54
mptbase: ioc0: Initiating bringup
ioc0: LSI53C1030 C0: Capabilities={Initiator,Target}
scsi0 : ioc0: LSI53C1030 C0, FwRev=01032341h, Ports=1, MaxQ=255, IRQ=57
scsi 0:0:0:0: Direct-Access     HP 36.4G MAU3036NC        HPC2 PQ: 0 ANSI: 3
scsi target0:0:0: Beginning Domain Validation
scsi target0:0:0: Ending Domain Validation
scsi target0:0:0: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
sd 0:0:0:0: Attached scsi generic sg0 type 0
sd 0:0:0:0: [sda] 71132960 512-byte logical blocks: (36.4 GB/33.9 GiB)
scsi 0:0:1:0: Direct-Access     HP 36.4G MAU3036NC        HPC2 PQ: 0 ANSI: 3
scsi target0:0:1: Beginning Domain Validation
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: cf 00 10 08
sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
scsi target0:0:1: Ending Domain Validation
scsi target0:0:1: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
 sda: sda1
sd 0:0:1:0: Attached scsi generic sg1 type 0
sd 0:0:1:0: [sdb] 71132960 512-byte logical blocks: (36.4 GB/33.9 GiB)
sd 0:0:1:0: [sdb] Write Protect is off
sd 0:0:1:0: [sdb] Mode Sense: cf 00 10 08
sd 0:0:1:0: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA
sd 0:0:0:0: [sda] Attached SCSI disk
GSI 28 (level, low) -> CPU 0 (0x0000) vector 55
mptbase: ioc1: Initiating bringup
 sdb: sdb1 sdb2
ioc1: LSI53C1030 C0: Capabilities={Initiator,Target}
scsi1 : ioc1: LSI53C1030 C0, FwRev=01032341h, Ports=1, MaxQ=255, IRQ=58
sd 0:0:1:0: [sdb] Attached SCSI disk
Fusion MPT FC Host driver 3.04.20
ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
GSI 18 (level, low) -> CPU 1 (0x0100) vector 56
ehci_hcd 0000:00:01.2: EHCI Host Controller
ehci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
ehci_hcd 0000:00:01.2: irq 53, io mem 0x80000000
ehci_hcd 0000:00:01.2: USB 2.0 started, EHCI 0.95
hub 1-0:1.0: USB hub found
hub 1-0:1.0: 5 ports detected
ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
GSI 16 (level, low) -> CPU 0 (0x0000) vector 57
ohci_hcd 0000:00:01.0: OHCI Host Controller
ohci_hcd 0000:00:01.0: new USB bus registered, assigned bus number 2
ohci_hcd 0000:00:01.0: irq 51, io mem 0x80002000
hub 2-0:1.0: USB hub found
hub 2-0:1.0: 3 ports detected
GSI 17 (level, low) -> CPU 1 (0x0100) vector 58
ohci_hcd 0000:00:01.1: OHCI Host Controller
ohci_hcd 0000:00:01.1: new USB bus registered, assigned bus number 3
ohci_hcd 0000:00:01.1: irq 52, io mem 0x80001000
hub 3-0:1.0: USB hub found
hub 3-0:1.0: 2 ports detected
uhci_hcd: USB Universal Host Controller Interface driver
Initializing USB Mass Storage driver...
usbcore: registered new interface driver usb-storage
USB Mass Storage support registered.
mousedev: PS/2 mouse device common for all mice
i2c /dev entries driver
EFI Variables Facility v0.08 2004-May-17
usbcore: registered new interface driver usbhid
usbhid: USB HID core driver
TCP: cubic registered
NET: Registered protocol family 17
Key type dns_resolver registered
ALSA device list:
  No soundcards found.
Freeing unused kernel memory: 848kB freed
udevd (136): /proc/136/oom_adj is deprecated, please use /proc/136/oom_score_adj instead.
udevd version 128 started
EXT3-fs (sdb2): (no)acl options not supported
kjournald starting.  Commit interval 5 seconds
EXT3-fs (sdb2): using internal journal
EXT3-fs (sdb2): mounted filesystem with ordered data mode
EXT3-fs (sdb2): (no)acl options not supported
udevd version 128 started
Fusion MPT misc device (ioctl) driver 3.04.20
mptctl: Registered with Fusion MPT base driver
mptctl: /dev/mptctl @ (major,minor=10,220)
EXT3-fs (sda1): (no)acl options not supported
kjournald starting.  Commit interval 5 seconds
EXT3-fs (sda1): using internal journal
EXT3-fs (sda1): mounted filesystem with ordered data mode
e1000: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
@ 2012-07-12 23:24                 ` Tony Luck
  0 siblings, 0 replies; 96+ messages in thread
From: Tony Luck @ 2012-07-12 23:24 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Fengguang Wu, linux-kernel, torvalds, joshhunt00, axboe, rni,
	vgoyal, vwadekar, herbert, davem, linux-crypto, swhiteho, bpm,
	elder, xfs, marcel, gustavo, johan.hedberg, linux-bluetooth,
	martin.petersen

[-- Attachment #1: Type: text/plain, Size: 311 bytes --]

On Thu, Jul 12, 2012 at 3:32 PM, Tejun Heo <tj@kernel.org> wrote:
> Can you please try the following debug patch instead?  Yours is
> different from Fengguang's.

New dmesg from next-20120712 + this new patch (instead of previous one)

[Note - I see some XXX traces, but no WARN_ON stack dump this time]

-Tony

[-- Attachment #2: dmesg.txt --]
[-- Type: text/plain, Size: 22303 bytes --]

Linux version 3.5.0-rc6-zx1-smp-next-20120712-1-gaf0be05 (aegl@linux-bxb1) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP Thu Jul 12 16:13:45 PDT 2012
EFI v1.10 by HP: SALsystab=0x3fefa000 ACPI 2.0=0x3fd5e000 SMBIOS=0x3fefc000 HCDP=0x3fd5c000
Early serial console at MMIO 0xff5e0000 (options '9600')
bootconsole [uart0] enabled
PCDP: v0 at 0x3fd5c000
Explicit "console="; ignoring PCDP
ACPI: RSDP 000000003fd5e000 00028 (v02     HP)
ACPI: XSDT 000000003fd5e02c 00094 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: FACP 000000003fd67390 000F4 (v03     HP   rx2620 00000000   HP 00000000)
ACPI Warning: 32/64X length mismatch in Gpe0Block: 32/16 (20120518/tbfadt-565)
ACPI Warning: 32/64X length mismatch in Gpe1Block: 32/16 (20120518/tbfadt-565)
ACPI: DSDT 000000003fd5e100 05F3C (v01     HP   rx2620 00000007 INTL 02012044)
ACPI: FACS 000000003fd67488 00040
ACPI: SPCR 000000003fd674c8 00050 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: DBGP 000000003fd67518 00034 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: APIC 000000003fd67610 000B0 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: SPMI 000000003fd67550 00050 (v04     HP   rx2620 00000000   HP 00000000)
ACPI: CPEP 000000003fd675a0 00034 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: SSDT 000000003fd64040 001D6 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd64220 00702 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd64930 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd65350 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd65d70 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd66790 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd671b0 000EB (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd672a0 000EF (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: Local APIC address c0000000fee00000
2 CPUs available, 2 CPUs total
warning: skipping physical page 0
Initial ramdisk at: 0xe00000407e9bb000 (6071698 bytes)
SAL 3.1: HP version 3.15
SAL Platform features: None
SAL: AP wakeup using external interrupt vector 0xff
MCA related initialization done
warning: skipping physical page 0
Zone ranges:
  DMA      [mem 0x00004000-0xffffffff]
  Normal   [mem 0x100000000-0x407ffc7fff]
Movable zone start for each node
Early memory node ranges
  node   0: [mem 0x00004000-0x3f4ebfff]
  node   0: [mem 0x3fc00000-0x3fd5bfff]
  node   0: [mem 0x4040000000-0x407fd2bfff]
  node   0: [mem 0x407fd98000-0x407fe07fff]
  node   0: [mem 0x407fe80000-0x407ffc7fff]
On node 0 totalpages: 130378
free_area_init_node: node 0, pgdat a0000001012ee380, node_mem_map a0007fffc7900038
  DMA zone: 896 pages used for memmap
  DMA zone: 0 pages reserved
  DMA zone: 64017 pages, LIFO batch:7
  Normal zone: 56896 pages used for memmap
  Normal zone: 8569 pages, LIFO batch:1
Virtual mem_map starts at 0xa0007fffc7900000
pcpu-alloc: s11392 r8192 d242560 u262144 alloc=16*16384
pcpu-alloc: [0] 0 [0] 1 
Built 1 zonelists in Zone order, mobility grouping off.  Total pages: 72586
Kernel command line: BOOT_IMAGE=scsi0:\efi\SuSE\l-zx1-smp.gz root=/dev/disk/by-id/scsi-200000e1100a5d5f2-part2  console=uart,mmio,0xff5e0000 
PID hash table entries: 4096 (order: 1, 32768 bytes)
Dentry cache hash table entries: 262144 (order: 7, 2097152 bytes)
Inode-cache hash table entries: 131072 (order: 6, 1048576 bytes)
Memory: 2048176k/2086064k available (13821k code, 37888k reserved, 5919k data, 848k init)
SLUB: Genslabs=17, HWalign=128, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Hierarchical RCU implementation.
	RCU restricting CPUs from NR_CPUS=16 to nr_cpu_ids=2.
NR_IRQS:768
ACPI: Local APIC address c0000000fee00000
GSI 36 (level, low) -> CPU 0 (0x0000) vector 48
CPU 0: base freq=199.999MHz, ITC ratio=13/2, ITC freq=1299.994MHz+/-650ppm
Console: colour dummy device 80x25
Calibrating delay loop... 1945.60 BogoMIPS (lpj=3891200)
pid_max: default: 32768 minimum: 301
Mount-cache hash table entries: 1024
ACPI: Core revision 20120518
Boot processor id 0x0/0x0
XXX cpu=0 gcwq=e000004040000d80 base=e000004040002000
XXX cpu=0 nr_running=0 @ e000004040002000
XXX cpu=0 nr_running=0 @ e000004040002008
XXX cpu=1 gcwq=e000004040040d80 base=e000004040042000
XXX cpu=1 nr_running=0 @ e000004040042000
XXX cpu=1 nr_running=0 @ e000004040042008
XXX cpu=16 nr_running=0 @ a000000101347680
XXX cpu=16 nr_running=0 @ a000000101347688
Fixed BSP b0 value from CPU 1
CPU 1: synchronized ITC with CPU 0 (last diff -3 cycles, maxerr 579 cycles)
CPU 1: base freq=199.999MHz, ITC ratio=13/2, ITC freq=1299.994MHz+/-650ppm
Brought up 2 CPUs
Total of 2 processors activated (3891.20 BogoMIPS).
DMI 2.3 present.
DMI: hp server rx2620                   , BIOS 03.17                                                            03/31/2005
NET: Registered protocol family 16
ACPI: bus type pci registered
bio: create slab <bio-0> at 0
ACPI: Added _OSI(Module Device)
ACPI: Added _OSI(Processor Device)
ACPI: Added _OSI(3.0 _SCP Extensions)
ACPI: Added _OSI(Processor Aggregator Device)
ACPI: EC: Look up EC in DSDT
ACPI: Interpreter enabled
ACPI: (supports S0 S5)
ACPI: Using IOSAPIC for interrupt routing
ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-1f])
pci_root HWP0002:00: host bridge window [io  0x0000-0x1fff]
pci_root HWP0002:00: host bridge window [mem 0x80000000-0x8fffffff]
pci_root HWP0002:00: host bridge window [mem 0x80004000000-0x80103fffffe]
PCI host bridge to bus 0000:00
pci_bus 0000:00: busn_res: [bus 00-1f] is inserted under domain [bus 00-ff]
pci_bus 0000:00: root bus resource [bus 00-1f]
pci_bus 0000:00: root bus resource [io  0x0000-0x1fff]
pci_bus 0000:00: root bus resource [mem 0x80000000-0x8fffffff]
pci_bus 0000:00: root bus resource [mem 0x80004000000-0x80103fffffe]
pci 0000:00:01.0: [1033:0035] type 00 class 0x0c0310
pci 0000:00:01.0: reg 10: [mem 0x80002000-0x80002fff]
pci 0000:00:01.0: supports D1 D2
pci 0000:00:01.0: PME# supported from D0 D1 D2 D3hot
pci 0000:00:01.1: [1033:0035] type 00 class 0x0c0310
pci 0000:00:01.1: reg 10: [mem 0x80001000-0x80001fff]
pci 0000:00:01.1: supports D1 D2
pci 0000:00:01.1: PME# supported from D0 D1 D2 D3hot
pci 0000:00:01.2: [1033:00e0] type 00 class 0x0c0320
pci 0000:00:01.2: reg 10: [mem 0x80000000-0x800000ff]
pci 0000:00:01.2: supports D1 D2
pci 0000:00:01.2: PME# supported from D0 D1 D2 D3hot
pci 0000:00:02.0: [1095:0649] type 00 class 0x01018f
pci 0000:00:02.0: reg 10: [io  0x0d18-0x0d1f]
pci 0000:00:02.0: reg 14: [io  0x0d24-0x0d27]
pci 0000:00:02.0: reg 18: [io  0x0d10-0x0d17]
pci 0000:00:02.0: reg 1c: [io  0x0d20-0x0d23]
pci 0000:00:02.0: reg 20: [io  0x0d00-0x0d0f]
pci 0000:00:02.0: supports D1 D2
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI0._PRT]
 pci0000:00: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI1] (domain 0000 [bus 20-3f])
pci_root HWP0002:01: host bridge window [mem 0xff5e0000-0xff5e0007]
pci_root HWP0002:01: host bridge window [mem 0xff5e2000-0xff5e2007]
pci_root HWP0002:01: host bridge window [io  0x2000-0x2fff]
pci_root HWP0002:01: host bridge window [mem 0x90000000-0x97ffffff]
pci_root HWP0002:01: host bridge window [mem 0x90004000000-0x90103fffffe]
PCI host bridge to bus 0000:20
pci_bus 0000:20: busn_res: [bus 20-3f] is inserted under domain [bus 00-ff]
pci_bus 0000:20: root bus resource [bus 20-3f]
pci_bus 0000:20: root bus resource [io  0x2000-0x2fff]
pci_bus 0000:20: root bus resource [mem 0x90000000-0x97ffffff]
pci_bus 0000:20: root bus resource [mem 0x90004000000-0x90103fffffe]
pci 0000:20:01.0: [1000:0030] type 00 class 0x010000
pci 0000:20:01.0: reg 10: [io  0x2100-0x21ff]
pci 0000:20:01.0: reg 14: [mem 0x903a0000-0x903bffff 64bit]
pci 0000:20:01.0: reg 1c: [mem 0x90380000-0x9039ffff 64bit]
pci 0000:20:01.0: reg 30: [mem 0x90100000-0x901fffff pref]
pci 0000:20:01.0: supports D1 D2
pci 0000:20:01.1: [1000:0030] type 00 class 0x010000
pci 0000:20:01.1: reg 10: [io  0x2000-0x20ff]
pci 0000:20:01.1: reg 14: [mem 0x90360000-0x9037ffff 64bit]
pci 0000:20:01.1: reg 1c: [mem 0x90340000-0x9035ffff 64bit]
pci 0000:20:01.1: reg 30: [mem 0x90000000-0x900fffff pref]
pci 0000:20:01.1: supports D1 D2
pci 0000:20:02.0: [8086:1079] type 00 class 0x020000
pci 0000:20:02.0: reg 10: [mem 0x90320000-0x9033ffff 64bit]
pci 0000:20:02.0: reg 18: [mem 0x90280000-0x902fffff 64bit]
pci 0000:20:02.0: reg 20: [io  0x2240-0x227f]
pci 0000:20:02.0: reg 30: [mem 0x90200000-0x9027ffff pref]
pci 0000:20:02.0: PME# supported from D0 D3hot D3cold
pci 0000:20:02.1: [8086:1079] type 00 class 0x020000
pci 0000:20:02.1: reg 10: [mem 0x90300000-0x9031ffff 64bit]
pci 0000:20:02.1: reg 20: [io  0x2200-0x223f]
pci 0000:20:02.1: PME# supported from D0 D3hot D3cold
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI1._PRT]
 pci0000:20: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI2] (domain 0000 [bus 40-5f])
pci_root HWP0002:02: host bridge window [io  0x3000-0x5fff]
pci_root HWP0002:02: host bridge window [mem 0x98000000-0xafffffff]
pci_root HWP0002:02: host bridge window [mem 0xa0004000000-0xa0103fffffe]
PCI host bridge to bus 0000:40
pci_bus 0000:40: busn_res: [bus 40-5f] is inserted under domain [bus 00-ff]
pci_bus 0000:40: root bus resource [bus 40-5f]
pci_bus 0000:40: root bus resource [io  0x3000-0x5fff]
pci_bus 0000:40: root bus resource [mem 0x98000000-0xafffffff]
pci_bus 0000:40: root bus resource [mem 0xa0004000000-0xa0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI2._PRT]
 pci0000:40: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 60-7f])
pci_root HWP0002:03: host bridge window [io  0x6000-0x7fff]
pci_root HWP0002:03: host bridge window [mem 0xb0000000-0xc7ffffff]
pci_root HWP0002:03: host bridge window [mem 0xb0004000000-0xb0103fffffe]
PCI host bridge to bus 0000:60
pci_bus 0000:60: busn_res: [bus 60-7f] is inserted under domain [bus 00-ff]
pci_bus 0000:60: root bus resource [bus 60-7f]
pci_bus 0000:60: root bus resource [io  0x6000-0x7fff]
pci_bus 0000:60: root bus resource [mem 0xb0000000-0xc7ffffff]
pci_bus 0000:60: root bus resource [mem 0xb0004000000-0xb0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI3._PRT]
 pci0000:60: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI4] (domain 0000 [bus 80-bf])
pci_root HWP0002:04: host bridge window [io  0x8000-0xbfff]
pci_root HWP0002:04: host bridge window [mem 0xc8000000-0xdfffffff]
pci_root HWP0002:04: host bridge window [mem 0xc0004000000-0xc0103fffffe]
PCI host bridge to bus 0000:80
pci_bus 0000:80: busn_res: [bus 80-bf] is inserted under domain [bus 00-ff]
pci_bus 0000:80: root bus resource [bus 80-bf]
pci_bus 0000:80: root bus resource [io  0x8000-0xbfff]
pci_bus 0000:80: root bus resource [mem 0xc8000000-0xdfffffff]
pci_bus 0000:80: root bus resource [mem 0xc0004000000-0xc0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI4._PRT]
 pci0000:80: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI6] (domain 0000 [bus c0-df])
pci_root HWP0002:05: host bridge window [io  0xc000-0xdfff]
pci_root HWP0002:05: host bridge window [mem 0xe0000000-0xefffffff]
pci_root HWP0002:05: host bridge window [mem 0xe0004000000-0xe0103fffffe]
PCI host bridge to bus 0000:c0
pci_bus 0000:c0: busn_res: [bus c0-df] is inserted under domain [bus 00-ff]
pci_bus 0000:c0: root bus resource [bus c0-df]
pci_bus 0000:c0: root bus resource [io  0xc000-0xdfff]
pci_bus 0000:c0: root bus resource [mem 0xe0000000-0xefffffff]
pci_bus 0000:c0: root bus resource [mem 0xe0004000000-0xe0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI6._PRT]
 pci0000:c0: Unable to request _OSC control (_OSC support mask: 0x09)
vgaarb: loaded
SCSI subsystem initialized
ACPI: bus type usb registered
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
Advanced Linux Sound Architecture Driver Version 1.0.25.
IOC: zx1 2.3 HPA 0xfed01000 IOVA space 1024Mb at 0x40000000
Switching to clocksource itc
pnp: PnP ACPI init
ACPI: bus type pnp registered
pnp 00:00: [mem 0xfed00000-0xfed07fff]
pnp 00:00: Plug and Play ACPI device, IDs HWP0001 PNP0a05 (active)
pnp 00:01: [mem 0xff5b0000-0xff5b0003]
pnp 00:01: Plug and Play ACPI device, IDs IPI0001 (active)
pnp 00:02: [bus 00-1f]
pnp 00:02: [mem 0xfed20000-0xfed21fff]
pnp 00:02: [io  0x0000-0x1fff window]
pnp 00:02: [mem 0x80000000-0x8fffffff window]
pnp 00:02: [mem 0x80004000000-0x80103fffffe window]
pnp 00:02: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:03: [bus 20-3f]
pnp 00:03: [mem 0xff5e0000-0xff5e0007 window]
pnp 00:03: [mem 0xff5e2000-0xff5e2007 window]
pnp 00:03: [mem 0xfed22000-0xfed23fff]
pnp 00:03: [io  0x2000-0x2fff window]
pnp 00:03: [mem 0x90000000-0x97ffffff window]
pnp 00:03: [mem 0x90004000000-0x90103fffffe window]
pnp 00:03: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
GSI 34 (level, low) -> CPU 1 (0x0100) vector 49
pnp 00:04: [irq 49]
pnp 00:04: [mem 0xff5e0000-0xff5e0007]
pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
GSI 35 (level, low) -> CPU 0 (0x0000) vector 50
pnp 00:05: [irq 50]
pnp 00:05: [mem 0xff5e2000-0xff5e2007]
pnp 00:05: Plug and Play ACPI device, IDs PNP0501 (active)
pnp 00:06: [bus 40-5f]
pnp 00:06: [mem 0xfed24000-0xfed25fff]
pnp 00:06: [io  0x3000-0x5fff window]
pnp 00:06: [mem 0x98000000-0xafffffff window]
pnp 00:06: [mem 0xa0004000000-0xa0103fffffe window]
pnp 00:06: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:07: [bus 60-7f]
pnp 00:07: [mem 0xfed26000-0xfed27fff]
pnp 00:07: [io  0x6000-0x7fff window]
pnp 00:07: [mem 0xb0000000-0xc7ffffff window]
pnp 00:07: [mem 0xb0004000000-0xb0103fffffe window]
pnp 00:07: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:08: [bus 80-bf]
pnp 00:08: [mem 0xfed28000-0xfed29fff]
pnp 00:08: [io  0x8000-0xbfff window]
pnp 00:08: [mem 0xc8000000-0xdfffffff window]
pnp 00:08: [mem 0xc0004000000-0xc0103fffffe window]
pnp 00:08: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:09: [bus c0-df]
pnp 00:09: [mem 0xfed2c000-0xfed2dfff]
pnp 00:09: [io  0xc000-0xdfff window]
pnp 00:09: [mem 0xe0000000-0xefffffff window]
pnp 00:09: [mem 0xe0004000000-0xe0103fffffe window]
pnp 00:09: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp: PnP ACPI: found 10 devices
ACPI: ACPI bus type pnp unregistered
NET: Registered protocol family 2
IP route cache hash table entries: 16384 (order: 3, 131072 bytes)
TCP established hash table entries: 65536 (order: 6, 1048576 bytes)
TCP bind hash table entries: 65536 (order: 6, 1048576 bytes)
TCP: Hash tables configured (established 65536 bind 65536)
TCP: reno registered
UDP hash table entries: 1024 (order: 1, 32768 bytes)
UDP-Lite hash table entries: 1024 (order: 1, 32768 bytes)
NET: Registered protocol family 1
RPC: Registered named UNIX socket transport module.
RPC: Registered udp transport module.
RPC: Registered tcp transport module.
RPC: Registered tcp NFSv4.1 backchannel transport module.
GSI 16 (level, low) -> CPU 1 (0x0100) vector 51
GSI 16 (level, low) -> CPU 1 (0x0100) vector 51 unregistered
GSI 17 (level, low) -> CPU 0 (0x0000) vector 51
GSI 17 (level, low) -> CPU 0 (0x0000) vector 51 unregistered
GSI 18 (level, low) -> CPU 1 (0x0100) vector 51
GSI 18 (level, low) -> CPU 1 (0x0100) vector 51 unregistered
PCI: CLS 128 bytes, default 128
Trying to unpack rootfs image as initramfs...
Freeing initrd memory: 5920kB freed
perfmon: version 2.0 IRQ 238
perfmon: Itanium 2 PMU detected, 16 PMCs, 18 PMDs, 4 counters (47 bits)
PAL Information Facility v0.5
perfmon: added sampling format default_format
perfmon_default_smpl: default_format v2.0 registered
HugeTLB registered 256 MB page size, pre-allocated 0 pages
NFS: Registering the id_resolver key type
Key type id_resolver registered
Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
msgmni has been set to 4011
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 254)
io scheduler noop registered
io scheduler deadline registered
io scheduler cfq registered (default)
pci_hotplug: PCI Hot Plug PCI Core version: 0.5
acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
ACPI: Power Button [PWRF]
input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input1
ACPI: Sleep Button [SLPF]
thermal LNXTHERM:00: registered as thermal_zone0
ACPI: Thermal Zone [THM0] (27 C)
Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
00:04: ttyS0 at MMIO 0xff5e0000 (irq = 49) is a 16550A
console [ttyS0] enabled, bootconsole disabled
00:05: ttyS1 at MMIO 0xff5e2000 (irq = 50) is a 16550A
EFI Time Services Driver v0.4
Linux agpgart interface v0.103
[drm] Initialized drm 1.1.0 20060810
[drm] radeon defaulting to userspace modesetting.
brd: module loaded
loop: module loaded
Uniform Multi-Platform E-IDE driver
cmd64x 0000:00:02.0: IDE controller (0x1095:0x0649 rev 0x02)
GSI 21 (level, low) -> CPU 0 (0x0000) vector 51
cmd64x 0000:00:02.0: IDE port disabled
cmd64x 0000:00:02.0: 100% native mode on irq 54
    ide0: BM-DMA at 0x0d00-0x0d07
Probing IDE interface ide0...
hda: _NEC DVD+/-RW ND-6650A, ATAPI CD/DVD-ROM drive
hda: host max PIO5 wanted PIO255(auto-tune) selected PIO4
hda: MWDMA2 mode selected
ide0 at 0xd18-0xd1f,0xd26 on irq 54
ide-gd driver 1.18
ide-cd driver 5.00
ide-cd: hda: ATAPI 24X DVD-ROM DVD-R CD-R/RW drive, 2048kB Cache
cdrom: Uniform CD-ROM driver Revision: 3.20
st: Version 20101219, fixed bufsize 32768, s/g segs 256
osst :I: Tape driver with OnStream support version 0.99.4
osst :I: $Id: osst.c,v 1.73 2005/01/01 21:13:34 wriede Exp $
e100: Intel(R) PRO/100 Network Driver, 3.5.24-k2-NAPI
e100: Copyright(c) 1999-2006 Intel Corporation
e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
e1000: Copyright (c) 1999-2006 Intel Corporation.
GSI 29 (level, low) -> CPU 1 (0x0100) vector 52
e1000 0000:20:02.0: eth0: (PCI-X:66MHz:64-bit) 00:13:21:5b:f6:9a
e1000 0000:20:02.0: eth0: Intel(R) PRO/1000 Network Connection
GSI 30 (level, low) -> CPU 0 (0x0000) vector 53
e1000 0000:20:02.1: eth1: (PCI-X:66MHz:64-bit) 00:13:21:5b:f6:9b
e1000 0000:20:02.1: eth1: Intel(R) PRO/1000 Network Connection
Fusion MPT base driver 3.04.20
Copyright (c) 1999-2008 LSI Corporation
Fusion MPT SPI Host driver 3.04.20
GSI 27 (level, low) -> CPU 1 (0x0100) vector 54
mptbase: ioc0: Initiating bringup
ioc0: LSI53C1030 C0: Capabilities={Initiator,Target}
scsi0 : ioc0: LSI53C1030 C0, FwRev=01032341h, Ports=1, MaxQ=255, IRQ=57
scsi 0:0:0:0: Direct-Access     HP 36.4G MAU3036NC        HPC2 PQ: 0 ANSI: 3
scsi target0:0:0: Beginning Domain Validation
scsi target0:0:0: Ending Domain Validation
scsi target0:0:0: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
sd 0:0:0:0: Attached scsi generic sg0 type 0
sd 0:0:0:0: [sda] 71132960 512-byte logical blocks: (36.4 GB/33.9 GiB)
scsi 0:0:1:0: Direct-Access     HP 36.4G MAU3036NC        HPC2 PQ: 0 ANSI: 3
scsi target0:0:1: Beginning Domain Validation
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: cf 00 10 08
sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
scsi target0:0:1: Ending Domain Validation
scsi target0:0:1: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
 sda: sda1
sd 0:0:1:0: Attached scsi generic sg1 type 0
sd 0:0:1:0: [sdb] 71132960 512-byte logical blocks: (36.4 GB/33.9 GiB)
sd 0:0:1:0: [sdb] Write Protect is off
sd 0:0:1:0: [sdb] Mode Sense: cf 00 10 08
sd 0:0:1:0: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA
sd 0:0:0:0: [sda] Attached SCSI disk
GSI 28 (level, low) -> CPU 0 (0x0000) vector 55
mptbase: ioc1: Initiating bringup
 sdb: sdb1 sdb2
ioc1: LSI53C1030 C0: Capabilities={Initiator,Target}
scsi1 : ioc1: LSI53C1030 C0, FwRev=01032341h, Ports=1, MaxQ=255, IRQ=58
sd 0:0:1:0: [sdb] Attached SCSI disk
Fusion MPT FC Host driver 3.04.20
ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
GSI 18 (level, low) -> CPU 1 (0x0100) vector 56
ehci_hcd 0000:00:01.2: EHCI Host Controller
ehci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
ehci_hcd 0000:00:01.2: irq 53, io mem 0x80000000
ehci_hcd 0000:00:01.2: USB 2.0 started, EHCI 0.95
hub 1-0:1.0: USB hub found
hub 1-0:1.0: 5 ports detected
ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
GSI 16 (level, low) -> CPU 0 (0x0000) vector 57
ohci_hcd 0000:00:01.0: OHCI Host Controller
ohci_hcd 0000:00:01.0: new USB bus registered, assigned bus number 2
ohci_hcd 0000:00:01.0: irq 51, io mem 0x80002000
hub 2-0:1.0: USB hub found
hub 2-0:1.0: 3 ports detected
GSI 17 (level, low) -> CPU 1 (0x0100) vector 58
ohci_hcd 0000:00:01.1: OHCI Host Controller
ohci_hcd 0000:00:01.1: new USB bus registered, assigned bus number 3
ohci_hcd 0000:00:01.1: irq 52, io mem 0x80001000
hub 3-0:1.0: USB hub found
hub 3-0:1.0: 2 ports detected
uhci_hcd: USB Universal Host Controller Interface driver
Initializing USB Mass Storage driver...
usbcore: registered new interface driver usb-storage
USB Mass Storage support registered.
mousedev: PS/2 mouse device common for all mice
i2c /dev entries driver
EFI Variables Facility v0.08 2004-May-17
usbcore: registered new interface driver usbhid
usbhid: USB HID core driver
TCP: cubic registered
NET: Registered protocol family 17
Key type dns_resolver registered
ALSA device list:
  No soundcards found.
Freeing unused kernel memory: 848kB freed
udevd (136): /proc/136/oom_adj is deprecated, please use /proc/136/oom_score_adj instead.
udevd version 128 started
EXT3-fs (sdb2): (no)acl options not supported
kjournald starting.  Commit interval 5 seconds
EXT3-fs (sdb2): using internal journal
EXT3-fs (sdb2): mounted filesystem with ordered data mode
EXT3-fs (sdb2): (no)acl options not supported
udevd version 128 started
Fusion MPT misc device (ioctl) driver 3.04.20
mptctl: Registered with Fusion MPT base driver
mptctl: /dev/mptctl @ (major,minor=10,220)
EXT3-fs (sda1): (no)acl options not supported
kjournald starting.  Commit interval 5 seconds
EXT3-fs (sda1): using internal journal
EXT3-fs (sda1): mounted filesystem with ordered data mode
e1000: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
@ 2012-07-12 23:24                 ` Tony Luck
  0 siblings, 0 replies; 96+ messages in thread
From: Tony Luck @ 2012-07-12 23:24 UTC (permalink / raw)
  To: Tejun Heo
  Cc: axboe, xfs, elder, rni, martin.petersen, linux-bluetooth,
	torvalds, marcel, linux-kernel, vwadekar, swhiteho, herbert, bpm,
	linux-crypto, gustavo, Fengguang Wu, joshhunt00, davem, vgoyal,
	johan.hedberg

[-- Attachment #1: Type: text/plain, Size: 311 bytes --]

On Thu, Jul 12, 2012 at 3:32 PM, Tejun Heo <tj@kernel.org> wrote:
> Can you please try the following debug patch instead?  Yours is
> different from Fengguang's.

New dmesg from mext-20120712 + this new patch (instead of previous one)

[Note - I see some XXX traces, but no WARN_ON stack dump this time]

-Tony

[-- Attachment #2: dmesg.txt --]
[-- Type: text/plain, Size: 22303 bytes --]

Linux version 3.5.0-rc6-zx1-smp-next-20120712-1-gaf0be05 (aegl@linux-bxb1) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP Thu Jul 12 16:13:45 PDT 2012
EFI v1.10 by HP: SALsystab=0x3fefa000 ACPI 2.0=0x3fd5e000 SMBIOS=0x3fefc000 HCDP=0x3fd5c000
Early serial console at MMIO 0xff5e0000 (options '9600')
bootconsole [uart0] enabled
PCDP: v0 at 0x3fd5c000
Explicit "console="; ignoring PCDP
ACPI: RSDP 000000003fd5e000 00028 (v02     HP)
ACPI: XSDT 000000003fd5e02c 00094 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: FACP 000000003fd67390 000F4 (v03     HP   rx2620 00000000   HP 00000000)
ACPI Warning: 32/64X length mismatch in Gpe0Block: 32/16 (20120518/tbfadt-565)
ACPI Warning: 32/64X length mismatch in Gpe1Block: 32/16 (20120518/tbfadt-565)
ACPI: DSDT 000000003fd5e100 05F3C (v01     HP   rx2620 00000007 INTL 02012044)
ACPI: FACS 000000003fd67488 00040
ACPI: SPCR 000000003fd674c8 00050 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: DBGP 000000003fd67518 00034 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: APIC 000000003fd67610 000B0 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: SPMI 000000003fd67550 00050 (v04     HP   rx2620 00000000   HP 00000000)
ACPI: CPEP 000000003fd675a0 00034 (v01     HP   rx2620 00000000   HP 00000000)
ACPI: SSDT 000000003fd64040 001D6 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd64220 00702 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd64930 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd65350 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd65d70 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd66790 00A16 (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd671b0 000EB (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: SSDT 000000003fd672a0 000EF (v01     HP   rx2620 00000006 INTL 02012044)
ACPI: Local APIC address c0000000fee00000
2 CPUs available, 2 CPUs total
warning: skipping physical page 0
Initial ramdisk at: 0xe00000407e9bb000 (6071698 bytes)
SAL 3.1: HP version 3.15
SAL Platform features: None
SAL: AP wakeup using external interrupt vector 0xff
MCA related initialization done
warning: skipping physical page 0
Zone ranges:
  DMA      [mem 0x00004000-0xffffffff]
  Normal   [mem 0x100000000-0x407ffc7fff]
Movable zone start for each node
Early memory node ranges
  node   0: [mem 0x00004000-0x3f4ebfff]
  node   0: [mem 0x3fc00000-0x3fd5bfff]
  node   0: [mem 0x4040000000-0x407fd2bfff]
  node   0: [mem 0x407fd98000-0x407fe07fff]
  node   0: [mem 0x407fe80000-0x407ffc7fff]
On node 0 totalpages: 130378
free_area_init_node: node 0, pgdat a0000001012ee380, node_mem_map a0007fffc7900038
  DMA zone: 896 pages used for memmap
  DMA zone: 0 pages reserved
  DMA zone: 64017 pages, LIFO batch:7
  Normal zone: 56896 pages used for memmap
  Normal zone: 8569 pages, LIFO batch:1
Virtual mem_map starts at 0xa0007fffc7900000
pcpu-alloc: s11392 r8192 d242560 u262144 alloc=16*16384
pcpu-alloc: [0] 0 [0] 1 
Built 1 zonelists in Zone order, mobility grouping off.  Total pages: 72586
Kernel command line: BOOT_IMAGE=scsi0:\efi\SuSE\l-zx1-smp.gz root=/dev/disk/by-id/scsi-200000e1100a5d5f2-part2  console=uart,mmio,0xff5e0000 
PID hash table entries: 4096 (order: 1, 32768 bytes)
Dentry cache hash table entries: 262144 (order: 7, 2097152 bytes)
Inode-cache hash table entries: 131072 (order: 6, 1048576 bytes)
Memory: 2048176k/2086064k available (13821k code, 37888k reserved, 5919k data, 848k init)
SLUB: Genslabs=17, HWalign=128, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Hierarchical RCU implementation.
	RCU restricting CPUs from NR_CPUS=16 to nr_cpu_ids=2.
NR_IRQS:768
ACPI: Local APIC address c0000000fee00000
GSI 36 (level, low) -> CPU 0 (0x0000) vector 48
CPU 0: base freq=199.999MHz, ITC ratio=13/2, ITC freq=1299.994MHz+/-650ppm
Console: colour dummy device 80x25
Calibrating delay loop... 1945.60 BogoMIPS (lpj=3891200)
pid_max: default: 32768 minimum: 301
Mount-cache hash table entries: 1024
ACPI: Core revision 20120518
Boot processor id 0x0/0x0
XXX cpu=0 gcwq=e000004040000d80 base=e000004040002000
XXX cpu=0 nr_running=0 @ e000004040002000
XXX cpu=0 nr_running=0 @ e000004040002008
XXX cpu=1 gcwq=e000004040040d80 base=e000004040042000
XXX cpu=1 nr_running=0 @ e000004040042000
XXX cpu=1 nr_running=0 @ e000004040042008
XXX cpu=16 nr_running=0 @ a000000101347680
XXX cpu=16 nr_running=0 @ a000000101347688
Fixed BSP b0 value from CPU 1
CPU 1: synchronized ITC with CPU 0 (last diff -3 cycles, maxerr 579 cycles)
CPU 1: base freq=199.999MHz, ITC ratio=13/2, ITC freq=1299.994MHz+/-650ppm
Brought up 2 CPUs
Total of 2 processors activated (3891.20 BogoMIPS).
DMI 2.3 present.
DMI: hp server rx2620                   , BIOS 03.17                                                            03/31/2005
NET: Registered protocol family 16
ACPI: bus type pci registered
bio: create slab <bio-0> at 0
ACPI: Added _OSI(Module Device)
ACPI: Added _OSI(Processor Device)
ACPI: Added _OSI(3.0 _SCP Extensions)
ACPI: Added _OSI(Processor Aggregator Device)
ACPI: EC: Look up EC in DSDT
ACPI: Interpreter enabled
ACPI: (supports S0 S5)
ACPI: Using IOSAPIC for interrupt routing
ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-1f])
pci_root HWP0002:00: host bridge window [io  0x0000-0x1fff]
pci_root HWP0002:00: host bridge window [mem 0x80000000-0x8fffffff]
pci_root HWP0002:00: host bridge window [mem 0x80004000000-0x80103fffffe]
PCI host bridge to bus 0000:00
pci_bus 0000:00: busn_res: [bus 00-1f] is inserted under domain [bus 00-ff]
pci_bus 0000:00: root bus resource [bus 00-1f]
pci_bus 0000:00: root bus resource [io  0x0000-0x1fff]
pci_bus 0000:00: root bus resource [mem 0x80000000-0x8fffffff]
pci_bus 0000:00: root bus resource [mem 0x80004000000-0x80103fffffe]
pci 0000:00:01.0: [1033:0035] type 00 class 0x0c0310
pci 0000:00:01.0: reg 10: [mem 0x80002000-0x80002fff]
pci 0000:00:01.0: supports D1 D2
pci 0000:00:01.0: PME# supported from D0 D1 D2 D3hot
pci 0000:00:01.1: [1033:0035] type 00 class 0x0c0310
pci 0000:00:01.1: reg 10: [mem 0x80001000-0x80001fff]
pci 0000:00:01.1: supports D1 D2
pci 0000:00:01.1: PME# supported from D0 D1 D2 D3hot
pci 0000:00:01.2: [1033:00e0] type 00 class 0x0c0320
pci 0000:00:01.2: reg 10: [mem 0x80000000-0x800000ff]
pci 0000:00:01.2: supports D1 D2
pci 0000:00:01.2: PME# supported from D0 D1 D2 D3hot
pci 0000:00:02.0: [1095:0649] type 00 class 0x01018f
pci 0000:00:02.0: reg 10: [io  0x0d18-0x0d1f]
pci 0000:00:02.0: reg 14: [io  0x0d24-0x0d27]
pci 0000:00:02.0: reg 18: [io  0x0d10-0x0d17]
pci 0000:00:02.0: reg 1c: [io  0x0d20-0x0d23]
pci 0000:00:02.0: reg 20: [io  0x0d00-0x0d0f]
pci 0000:00:02.0: supports D1 D2
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI0._PRT]
 pci0000:00: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI1] (domain 0000 [bus 20-3f])
pci_root HWP0002:01: host bridge window [mem 0xff5e0000-0xff5e0007]
pci_root HWP0002:01: host bridge window [mem 0xff5e2000-0xff5e2007]
pci_root HWP0002:01: host bridge window [io  0x2000-0x2fff]
pci_root HWP0002:01: host bridge window [mem 0x90000000-0x97ffffff]
pci_root HWP0002:01: host bridge window [mem 0x90004000000-0x90103fffffe]
PCI host bridge to bus 0000:20
pci_bus 0000:20: busn_res: [bus 20-3f] is inserted under domain [bus 00-ff]
pci_bus 0000:20: root bus resource [bus 20-3f]
pci_bus 0000:20: root bus resource [io  0x2000-0x2fff]
pci_bus 0000:20: root bus resource [mem 0x90000000-0x97ffffff]
pci_bus 0000:20: root bus resource [mem 0x90004000000-0x90103fffffe]
pci 0000:20:01.0: [1000:0030] type 00 class 0x010000
pci 0000:20:01.0: reg 10: [io  0x2100-0x21ff]
pci 0000:20:01.0: reg 14: [mem 0x903a0000-0x903bffff 64bit]
pci 0000:20:01.0: reg 1c: [mem 0x90380000-0x9039ffff 64bit]
pci 0000:20:01.0: reg 30: [mem 0x90100000-0x901fffff pref]
pci 0000:20:01.0: supports D1 D2
pci 0000:20:01.1: [1000:0030] type 00 class 0x010000
pci 0000:20:01.1: reg 10: [io  0x2000-0x20ff]
pci 0000:20:01.1: reg 14: [mem 0x90360000-0x9037ffff 64bit]
pci 0000:20:01.1: reg 1c: [mem 0x90340000-0x9035ffff 64bit]
pci 0000:20:01.1: reg 30: [mem 0x90000000-0x900fffff pref]
pci 0000:20:01.1: supports D1 D2
pci 0000:20:02.0: [8086:1079] type 00 class 0x020000
pci 0000:20:02.0: reg 10: [mem 0x90320000-0x9033ffff 64bit]
pci 0000:20:02.0: reg 18: [mem 0x90280000-0x902fffff 64bit]
pci 0000:20:02.0: reg 20: [io  0x2240-0x227f]
pci 0000:20:02.0: reg 30: [mem 0x90200000-0x9027ffff pref]
pci 0000:20:02.0: PME# supported from D0 D3hot D3cold
pci 0000:20:02.1: [8086:1079] type 00 class 0x020000
pci 0000:20:02.1: reg 10: [mem 0x90300000-0x9031ffff 64bit]
pci 0000:20:02.1: reg 20: [io  0x2200-0x223f]
pci 0000:20:02.1: PME# supported from D0 D3hot D3cold
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI1._PRT]
 pci0000:20: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI2] (domain 0000 [bus 40-5f])
pci_root HWP0002:02: host bridge window [io  0x3000-0x5fff]
pci_root HWP0002:02: host bridge window [mem 0x98000000-0xafffffff]
pci_root HWP0002:02: host bridge window [mem 0xa0004000000-0xa0103fffffe]
PCI host bridge to bus 0000:40
pci_bus 0000:40: busn_res: [bus 40-5f] is inserted under domain [bus 00-ff]
pci_bus 0000:40: root bus resource [bus 40-5f]
pci_bus 0000:40: root bus resource [io  0x3000-0x5fff]
pci_bus 0000:40: root bus resource [mem 0x98000000-0xafffffff]
pci_bus 0000:40: root bus resource [mem 0xa0004000000-0xa0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI2._PRT]
 pci0000:40: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 60-7f])
pci_root HWP0002:03: host bridge window [io  0x6000-0x7fff]
pci_root HWP0002:03: host bridge window [mem 0xb0000000-0xc7ffffff]
pci_root HWP0002:03: host bridge window [mem 0xb0004000000-0xb0103fffffe]
PCI host bridge to bus 0000:60
pci_bus 0000:60: busn_res: [bus 60-7f] is inserted under domain [bus 00-ff]
pci_bus 0000:60: root bus resource [bus 60-7f]
pci_bus 0000:60: root bus resource [io  0x6000-0x7fff]
pci_bus 0000:60: root bus resource [mem 0xb0000000-0xc7ffffff]
pci_bus 0000:60: root bus resource [mem 0xb0004000000-0xb0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI3._PRT]
 pci0000:60: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI4] (domain 0000 [bus 80-bf])
pci_root HWP0002:04: host bridge window [io  0x8000-0xbfff]
pci_root HWP0002:04: host bridge window [mem 0xc8000000-0xdfffffff]
pci_root HWP0002:04: host bridge window [mem 0xc0004000000-0xc0103fffffe]
PCI host bridge to bus 0000:80
pci_bus 0000:80: busn_res: [bus 80-bf] is inserted under domain [bus 00-ff]
pci_bus 0000:80: root bus resource [bus 80-bf]
pci_bus 0000:80: root bus resource [io  0x8000-0xbfff]
pci_bus 0000:80: root bus resource [mem 0xc8000000-0xdfffffff]
pci_bus 0000:80: root bus resource [mem 0xc0004000000-0xc0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI4._PRT]
 pci0000:80: Unable to request _OSC control (_OSC support mask: 0x09)
ACPI: PCI Root Bridge [PCI6] (domain 0000 [bus c0-df])
pci_root HWP0002:05: host bridge window [io  0xc000-0xdfff]
pci_root HWP0002:05: host bridge window [mem 0xe0000000-0xefffffff]
pci_root HWP0002:05: host bridge window [mem 0xe0004000000-0xe0103fffffe]
PCI host bridge to bus 0000:c0
pci_bus 0000:c0: busn_res: [bus c0-df] is inserted under domain [bus 00-ff]
pci_bus 0000:c0: root bus resource [bus c0-df]
pci_bus 0000:c0: root bus resource [io  0xc000-0xdfff]
pci_bus 0000:c0: root bus resource [mem 0xe0000000-0xefffffff]
pci_bus 0000:c0: root bus resource [mem 0xe0004000000-0xe0103fffffe]
ACPI: PCI Interrupt Routing Table [\_SB_.SBA0.PCI6._PRT]
 pci0000:c0: Unable to request _OSC control (_OSC support mask: 0x09)
vgaarb: loaded
SCSI subsystem initialized
ACPI: bus type usb registered
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
Advanced Linux Sound Architecture Driver Version 1.0.25.
IOC: zx1 2.3 HPA 0xfed01000 IOVA space 1024Mb at 0x40000000
Switching to clocksource itc
pnp: PnP ACPI init
ACPI: bus type pnp registered
pnp 00:00: [mem 0xfed00000-0xfed07fff]
pnp 00:00: Plug and Play ACPI device, IDs HWP0001 PNP0a05 (active)
pnp 00:01: [mem 0xff5b0000-0xff5b0003]
pnp 00:01: Plug and Play ACPI device, IDs IPI0001 (active)
pnp 00:02: [bus 00-1f]
pnp 00:02: [mem 0xfed20000-0xfed21fff]
pnp 00:02: [io  0x0000-0x1fff window]
pnp 00:02: [mem 0x80000000-0x8fffffff window]
pnp 00:02: [mem 0x80004000000-0x80103fffffe window]
pnp 00:02: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:03: [bus 20-3f]
pnp 00:03: [mem 0xff5e0000-0xff5e0007 window]
pnp 00:03: [mem 0xff5e2000-0xff5e2007 window]
pnp 00:03: [mem 0xfed22000-0xfed23fff]
pnp 00:03: [io  0x2000-0x2fff window]
pnp 00:03: [mem 0x90000000-0x97ffffff window]
pnp 00:03: [mem 0x90004000000-0x90103fffffe window]
pnp 00:03: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
GSI 34 (level, low) -> CPU 1 (0x0100) vector 49
pnp 00:04: [irq 49]
pnp 00:04: [mem 0xff5e0000-0xff5e0007]
pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
GSI 35 (level, low) -> CPU 0 (0x0000) vector 50
pnp 00:05: [irq 50]
pnp 00:05: [mem 0xff5e2000-0xff5e2007]
pnp 00:05: Plug and Play ACPI device, IDs PNP0501 (active)
pnp 00:06: [bus 40-5f]
pnp 00:06: [mem 0xfed24000-0xfed25fff]
pnp 00:06: [io  0x3000-0x5fff window]
pnp 00:06: [mem 0x98000000-0xafffffff window]
pnp 00:06: [mem 0xa0004000000-0xa0103fffffe window]
pnp 00:06: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:07: [bus 60-7f]
pnp 00:07: [mem 0xfed26000-0xfed27fff]
pnp 00:07: [io  0x6000-0x7fff window]
pnp 00:07: [mem 0xb0000000-0xc7ffffff window]
pnp 00:07: [mem 0xb0004000000-0xb0103fffffe window]
pnp 00:07: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:08: [bus 80-bf]
pnp 00:08: [mem 0xfed28000-0xfed29fff]
pnp 00:08: [io  0x8000-0xbfff window]
pnp 00:08: [mem 0xc8000000-0xdfffffff window]
pnp 00:08: [mem 0xc0004000000-0xc0103fffffe window]
pnp 00:08: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp 00:09: [bus c0-df]
pnp 00:09: [mem 0xfed2c000-0xfed2dfff]
pnp 00:09: [io  0xc000-0xdfff window]
pnp 00:09: [mem 0xe0000000-0xefffffff window]
pnp 00:09: [mem 0xe0004000000-0xe0103fffffe window]
pnp 00:09: Plug and Play ACPI device, IDs HWP0002 PNP0a03 (active)
pnp: PnP ACPI: found 10 devices
ACPI: ACPI bus type pnp unregistered
NET: Registered protocol family 2
IP route cache hash table entries: 16384 (order: 3, 131072 bytes)
TCP established hash table entries: 65536 (order: 6, 1048576 bytes)
TCP bind hash table entries: 65536 (order: 6, 1048576 bytes)
TCP: Hash tables configured (established 65536 bind 65536)
TCP: reno registered
UDP hash table entries: 1024 (order: 1, 32768 bytes)
UDP-Lite hash table entries: 1024 (order: 1, 32768 bytes)
NET: Registered protocol family 1
RPC: Registered named UNIX socket transport module.
RPC: Registered udp transport module.
RPC: Registered tcp transport module.
RPC: Registered tcp NFSv4.1 backchannel transport module.
GSI 16 (level, low) -> CPU 1 (0x0100) vector 51
GSI 16 (level, low) -> CPU 1 (0x0100) vector 51 unregistered
GSI 17 (level, low) -> CPU 0 (0x0000) vector 51
GSI 17 (level, low) -> CPU 0 (0x0000) vector 51 unregistered
GSI 18 (level, low) -> CPU 1 (0x0100) vector 51
GSI 18 (level, low) -> CPU 1 (0x0100) vector 51 unregistered
PCI: CLS 128 bytes, default 128
Trying to unpack rootfs image as initramfs...
Freeing initrd memory: 5920kB freed
perfmon: version 2.0 IRQ 238
perfmon: Itanium 2 PMU detected, 16 PMCs, 18 PMDs, 4 counters (47 bits)
PAL Information Facility v0.5
perfmon: added sampling format default_format
perfmon_default_smpl: default_format v2.0 registered
HugeTLB registered 256 MB page size, pre-allocated 0 pages
NFS: Registering the id_resolver key type
Key type id_resolver registered
Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
msgmni has been set to 4011
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 254)
io scheduler noop registered
io scheduler deadline registered
io scheduler cfq registered (default)
pci_hotplug: PCI Hot Plug PCI Core version: 0.5
acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
ACPI: Power Button [PWRF]
input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input1
ACPI: Sleep Button [SLPF]
thermal LNXTHERM:00: registered as thermal_zone0
ACPI: Thermal Zone [THM0] (27 C)
Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
00:04: ttyS0 at MMIO 0xff5e0000 (irq = 49) is a 16550A
console [ttyS0] enabled, bootconsole disabled
00:05: ttyS1 at MMIO 0xff5e2000 (irq = 50) is a 16550A
EFI Time Services Driver v0.4
Linux agpgart interface v0.103
[drm] Initialized drm 1.1.0 20060810
[drm] radeon defaulting to userspace modesetting.
brd: module loaded
loop: module loaded
Uniform Multi-Platform E-IDE driver
cmd64x 0000:00:02.0: IDE controller (0x1095:0x0649 rev 0x02)
GSI 21 (level, low) -> CPU 0 (0x0000) vector 51
cmd64x 0000:00:02.0: IDE port disabled
cmd64x 0000:00:02.0: 100% native mode on irq 54
    ide0: BM-DMA at 0x0d00-0x0d07
Probing IDE interface ide0...
hda: _NEC DVD+/-RW ND-6650A, ATAPI CD/DVD-ROM drive
hda: host max PIO5 wanted PIO255(auto-tune) selected PIO4
hda: MWDMA2 mode selected
ide0 at 0xd18-0xd1f,0xd26 on irq 54
ide-gd driver 1.18
ide-cd driver 5.00
ide-cd: hda: ATAPI 24X DVD-ROM DVD-R CD-R/RW drive, 2048kB Cache
cdrom: Uniform CD-ROM driver Revision: 3.20
st: Version 20101219, fixed bufsize 32768, s/g segs 256
osst :I: Tape driver with OnStream support version 0.99.4
osst :I: $Id: osst.c,v 1.73 2005/01/01 21:13:34 wriede Exp $
e100: Intel(R) PRO/100 Network Driver, 3.5.24-k2-NAPI
e100: Copyright(c) 1999-2006 Intel Corporation
e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
e1000: Copyright (c) 1999-2006 Intel Corporation.
GSI 29 (level, low) -> CPU 1 (0x0100) vector 52
e1000 0000:20:02.0: eth0: (PCI-X:66MHz:64-bit) 00:13:21:5b:f6:9a
e1000 0000:20:02.0: eth0: Intel(R) PRO/1000 Network Connection
GSI 30 (level, low) -> CPU 0 (0x0000) vector 53
e1000 0000:20:02.1: eth1: (PCI-X:66MHz:64-bit) 00:13:21:5b:f6:9b
e1000 0000:20:02.1: eth1: Intel(R) PRO/1000 Network Connection
Fusion MPT base driver 3.04.20
Copyright (c) 1999-2008 LSI Corporation
Fusion MPT SPI Host driver 3.04.20
GSI 27 (level, low) -> CPU 1 (0x0100) vector 54
mptbase: ioc0: Initiating bringup
ioc0: LSI53C1030 C0: Capabilities={Initiator,Target}
scsi0 : ioc0: LSI53C1030 C0, FwRev=01032341h, Ports=1, MaxQ=255, IRQ=57
scsi 0:0:0:0: Direct-Access     HP 36.4G MAU3036NC        HPC2 PQ: 0 ANSI: 3
scsi target0:0:0: Beginning Domain Validation
scsi target0:0:0: Ending Domain Validation
scsi target0:0:0: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
sd 0:0:0:0: Attached scsi generic sg0 type 0
sd 0:0:0:0: [sda] 71132960 512-byte logical blocks: (36.4 GB/33.9 GiB)
scsi 0:0:1:0: Direct-Access     HP 36.4G MAU3036NC        HPC2 PQ: 0 ANSI: 3
scsi target0:0:1: Beginning Domain Validation
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: cf 00 10 08
sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
scsi target0:0:1: Ending Domain Validation
scsi target0:0:1: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
 sda: sda1
sd 0:0:1:0: Attached scsi generic sg1 type 0
sd 0:0:1:0: [sdb] 71132960 512-byte logical blocks: (36.4 GB/33.9 GiB)
sd 0:0:1:0: [sdb] Write Protect is off
sd 0:0:1:0: [sdb] Mode Sense: cf 00 10 08
sd 0:0:1:0: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA
sd 0:0:0:0: [sda] Attached SCSI disk
GSI 28 (level, low) -> CPU 0 (0x0000) vector 55
mptbase: ioc1: Initiating bringup
 sdb: sdb1 sdb2
ioc1: LSI53C1030 C0: Capabilities={Initiator,Target}
scsi1 : ioc1: LSI53C1030 C0, FwRev=01032341h, Ports=1, MaxQ=255, IRQ=58
sd 0:0:1:0: [sdb] Attached SCSI disk
Fusion MPT FC Host driver 3.04.20
ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
GSI 18 (level, low) -> CPU 1 (0x0100) vector 56
ehci_hcd 0000:00:01.2: EHCI Host Controller
ehci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
ehci_hcd 0000:00:01.2: irq 53, io mem 0x80000000
ehci_hcd 0000:00:01.2: USB 2.0 started, EHCI 0.95
hub 1-0:1.0: USB hub found
hub 1-0:1.0: 5 ports detected
ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
GSI 16 (level, low) -> CPU 0 (0x0000) vector 57
ohci_hcd 0000:00:01.0: OHCI Host Controller
ohci_hcd 0000:00:01.0: new USB bus registered, assigned bus number 2
ohci_hcd 0000:00:01.0: irq 51, io mem 0x80002000
hub 2-0:1.0: USB hub found
hub 2-0:1.0: 3 ports detected
GSI 17 (level, low) -> CPU 1 (0x0100) vector 58
ohci_hcd 0000:00:01.1: OHCI Host Controller
ohci_hcd 0000:00:01.1: new USB bus registered, assigned bus number 3
ohci_hcd 0000:00:01.1: irq 52, io mem 0x80001000
hub 3-0:1.0: USB hub found
hub 3-0:1.0: 2 ports detected
uhci_hcd: USB Universal Host Controller Interface driver
Initializing USB Mass Storage driver...
usbcore: registered new interface driver usb-storage
USB Mass Storage support registered.
mousedev: PS/2 mouse device common for all mice
i2c /dev entries driver
EFI Variables Facility v0.08 2004-May-17
usbcore: registered new interface driver usbhid
usbhid: USB HID core driver
TCP: cubic registered
NET: Registered protocol family 17
Key type dns_resolver registered
ALSA device list:
  No soundcards found.
Freeing unused kernel memory: 848kB freed
udevd (136): /proc/136/oom_adj is deprecated, please use /proc/136/oom_score_adj instead.
udevd version 128 started
EXT3-fs (sdb2): (no)acl options not supported
kjournald starting.  Commit interval 5 seconds
EXT3-fs (sdb2): using internal journal
EXT3-fs (sdb2): mounted filesystem with ordered data mode
EXT3-fs (sdb2): (no)acl options not supported
udevd version 128 started
Fusion MPT misc device (ioctl) driver 3.04.20
mptctl: Registered with Fusion MPT base driver
mptctl: /dev/mptctl @ (major,minor=10,220)
EXT3-fs (sda1): (no)acl options not supported
kjournald starting.  Commit interval 5 seconds
EXT3-fs (sda1): using internal journal
EXT3-fs (sda1): mounted filesystem with ordered data mode
e1000: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX

[-- Attachment #3: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
  2012-07-12 23:24                 ` Tony Luck
  (?)
@ 2012-07-12 23:36                   ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-12 23:36 UTC (permalink / raw)
  To: Tony Luck
  Cc: Fengguang Wu, linux-kernel, torvalds, joshhunt00, axboe, rni,
	vgoyal, vwadekar, herbert, davem, linux-crypto, swhiteho, bpm,
	elder, xfs, marcel, gustavo, johan.hedberg, linux-bluetooth,
	martin.petersen

Hello, Tony.

On Thu, Jul 12, 2012 at 04:24:47PM -0700, Tony Luck wrote:
> On Thu, Jul 12, 2012 at 3:32 PM, Tejun Heo <tj@kernel.org> wrote:
> > Can you please try the following debug patch instead?  Yours is
> > different from Fengguang's.
> 
> New dmesg from mext-20120712 + this new patch (instead of previous one)
> 
> [Note - I see some XXX traces, but no WARN_ON stack dump this time]

The debug patch didn't do anything for the bug itself.  I suppose it's
timing dependent and doesn't always happen (it never reproduces here
for some reason).  Can you please repeat several times and see whether
the warning can be triggered?

Thank you very much!

-- 
tejun

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
  2012-07-12 23:36                   ` Tejun Heo
@ 2012-07-12 23:46                     ` Tony Luck
  -1 siblings, 0 replies; 96+ messages in thread
From: Tony Luck @ 2012-07-12 23:46 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Fengguang Wu, linux-kernel, torvalds, joshhunt00, axboe, rni,
	vgoyal, vwadekar, herbert, davem, linux-crypto, swhiteho, bpm,
	elder, xfs, marcel, gustavo, johan.hedberg, linux-bluetooth,
	martin.petersen

On Thu, Jul 12, 2012 at 4:36 PM, Tejun Heo <tj@kernel.org> wrote:
> The debug patch didn't do anything for the bug itself.  I suppose it's
> timing dependent and doesn't always happen (it never reproduces here
> for some reason).  Can you please repeat several times and see whether
> the warning can be triggered?

Still hasn't come back in three reboots.  I have to leave now, can continue
tomorrow.

-Tony


* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
  2012-07-12 21:45           ` Tejun Heo
@ 2012-07-13  2:08             ` Fengguang Wu
  -1 siblings, 0 replies; 96+ messages in thread
From: Fengguang Wu @ 2012-07-13  2:08 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar,
	herbert, davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel,
	gustavo, johan.hedberg, linux-bluetooth, martin.petersen,
	Tony Luck

[-- Attachment #1: Type: text/plain, Size: 3837 bytes --]

On Thu, Jul 12, 2012 at 02:45:14PM -0700, Tejun Heo wrote:
> Hello, again.
> 
> On Thu, Jul 12, 2012 at 10:05:19AM -0700, Tejun Heo wrote:
> > On Thu, Jul 12, 2012 at 09:06:48PM +0800, Fengguang Wu wrote:
> > > [    0.207977] WARNING: at /c/kernel-tests/mm/kernel/workqueue.c:1217 worker_enter_idle+0x2b8/0x32b()
> > > [    0.207977] Modules linked in:
> > > [    0.207977] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-08414-g9645fff #15
> > > [    0.207977] Call Trace:
> > > [    0.207977]  [<ffffffff81087189>] ? worker_enter_idle+0x2b8/0x32b
> > > [    0.207977]  [<ffffffff810559d9>] warn_slowpath_common+0xae/0xdb
> > > [    0.207977]  [<ffffffff81055a2e>] warn_slowpath_null+0x28/0x31
> > > [    0.207977]  [<ffffffff81087189>] worker_enter_idle+0x2b8/0x32b
> > > [    0.207977]  [<ffffffff81087222>] start_worker+0x26/0x42
> > > [    0.207977]  [<ffffffff81c8b261>] init_workqueues+0x2d2/0x59a
> > > [    0.207977]  [<ffffffff81c8af8f>] ? usermodehelper_init+0x8a/0x8a
> > > [    0.207977]  [<ffffffff81000284>] do_one_initcall+0xce/0x272
> > > [    0.207977]  [<ffffffff81c6f650>] kernel_init+0x12e/0x3c1
> > > [    0.207977]  [<ffffffff814b9b74>] kernel_thread_helper+0x4/0x10
> > > [    0.207977]  [<ffffffff814b80b0>] ? retint_restore_args+0x13/0x13
> > > [    0.207977]  [<ffffffff81c6f522>] ? start_kernel+0x737/0x737
> > > [    0.207977]  [<ffffffff814b9b70>] ? gs_change+0x13/0x13
> > 
> > Yeah, I forgot to flip the WARN_ON_ONCE() condition so that it checks
> > nr_running before looking at pool->nr_running.  The warning is
> > spurious.  Will post fix soon.
> 
> I was wrong and am now dazed and confused.  That's from
> init_workqueues() where only cpu0 is running.  How the hell did
> nr_running manage to become non-zero at that point?  Can you please
> apply the following patch and report the boot log?  Thank you.

Tejun, here is the data I got:

[    0.165669] Performance Events: unsupported Netburst CPU model 6 no PMU driver, software events only.
[    0.167001] XXX cpu=0 gcwq=ffff88000dc0cfc0 base=ffff88000dc11e80
[    0.167989] XXX cpu=0 nr_running=0 @ ffff88000dc11e80
[    0.168988] XXX cpu=0 nr_running=0 @ ffff88000dc11e88
[    0.169988] XXX cpu=1 gcwq=ffff88000dd0cfc0 base=ffff88000dd11e80
[    0.170988] XXX cpu=1 nr_running=0 @ ffff88000dd11e80
[    0.171987] XXX cpu=1 nr_running=0 @ ffff88000dd11e88
[    0.172988] XXX cpu=8 nr_running=0 @ ffffffff81d7c430
[    0.173987] XXX cpu=8 nr_running=12 @ ffffffff81d7c438
[    0.175416] ------------[ cut here ]------------
[    0.175981] WARNING: at /c/wfg/linux/kernel/workqueue.c:1220 worker_enter_idle+0x2b8/0x32b()
[    0.175981] Modules linked in:
[    0.175981] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-bisect-next-20120712-dirty #102
[    0.175981] Call Trace:
[    0.175981]  [<ffffffff81087455>] ? worker_enter_idle+0x2b8/0x32b
[    0.175981]  [<ffffffff810559d1>] warn_slowpath_common+0xae/0xdb
[    0.175981]  [<ffffffff81055a26>] warn_slowpath_null+0x28/0x31
[    0.175981]  [<ffffffff81087455>] worker_enter_idle+0x2b8/0x32b
[    0.175981]  [<ffffffff810874ee>] start_worker+0x26/0x42
[    0.175981]  [<ffffffff81c7dc4d>] init_workqueues+0x370/0x638
[    0.175981]  [<ffffffff81c7d8dd>] ? usermodehelper_init+0x8a/0x8a
[    0.175981]  [<ffffffff81000284>] do_one_initcall+0xce/0x272
[    0.175981]  [<ffffffff81c62652>] kernel_init+0x12e/0x3c1
[    0.175981]  [<ffffffff814b6e74>] kernel_thread_helper+0x4/0x10
[    0.175981]  [<ffffffff814b53b0>] ? retint_restore_args+0x13/0x13
[    0.175981]  [<ffffffff81c62524>] ? start_kernel+0x739/0x739
[    0.175981]  [<ffffffff814b6e70>] ? gs_change+0x13/0x13
[    0.175981] ---[ end trace c22d98677c4d3e37 ]---
[    0.178091] Testing tracer nop: PASSED

The attached dmesg is not complete because, once it sees the oops message,
my script kills the kvm to save time.

Thanks,
Fengguang

[-- Attachment #2: dmesg-kvm_bisect-waimea-27649-2012-07-13-08-34-35 --]
[-- Type: text/plain, Size: 93870 bytes --]

[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.5.0-rc6-bisect-next-20120712-dirty (wfg@bee) (gcc version 4.7.0 (Debian 4.7.1-1) ) #102 SMP Fri Jul 13 08:32:30 CST 2012
[    0.000000] Command line: bisect-reboot x86_64-randconfig run_test= trinity=0 auth_hashtable_size=10 sunrpc.auth_hashtable_size=10 log_buf_len=8M ignore_loglevel debug sched_debug apic=debug dynamic_printk sysrq_always_enabled panic=10 hung_task_panic=1 softlockup_panic=1 unknown_nmi_panic=1 nmi_watchdog=panic,lapic  prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal  root=/dev/ram0 rw BOOT_IMAGE=x86_64/vmlinuz-bisect
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   Centaur CentaurHauls
[    0.000000] Disabled fast string operations
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009f3ff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009f400-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000000fffcfff] usable
[    0.000000] BIOS-e820: [mem 0x000000000fffd000-0x000000000fffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fffbc000-0x00000000ffffffff] reserved
[    0.000000] debug: ignoring loglevel setting.
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] e820: update [mem 0x00000000-0x0000ffff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0xfffd max_arch_pfn = 0x400000000
[    0.000000] MTRR default type: write-back
[    0.000000] MTRR fixed ranges enabled:
[    0.000000]   00000-9FFFF write-back
[    0.000000]   A0000-BFFFF uncachable
[    0.000000]   C0000-FFFFF write-protect
[    0.000000] MTRR variable ranges enabled:
[    0.000000]   0 base 00E0000000 mask FFE0000000 uncachable
[    0.000000]   1 disabled
[    0.000000]   2 disabled
[    0.000000]   3 disabled
[    0.000000]   4 disabled
[    0.000000]   5 disabled
[    0.000000]   6 disabled
[    0.000000]   7 disabled
[    0.000000] Scan for SMP in [mem 0x00000000-0x000003ff]
[    0.000000] Scan for SMP in [mem 0x0009fc00-0x0009ffff]
[    0.000000] Scan for SMP in [mem 0x000f0000-0x000fffff]
[    0.000000] found SMP MP-table at [mem 0x000f8860-0x000f886f] mapped at [ffff8800000f8860]
[    0.000000]   mpc: f8870-f898c
[    0.000000] initial memory mapped: [mem 0x00000000-0x1fffffff]
[    0.000000] Base memory trampoline at [ffff880000099000] 99000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x0fffcfff]
[    0.000000]  [mem 0x00000000-0x0fffcfff] page 4k
[    0.000000] kernel direct mapping tables up to 0xfffcfff @ [mem 0x0e854000-0x0e8d5fff]
[    0.000000] log_buf_len: 8388608
[    0.000000] early log buf free: 128176(97%)
[    0.000000] RAMDISK: [mem 0x0e8d6000-0x0ffeffff]
[    0.000000] No NUMA configuration found
[    0.000000] Faking a node at [mem 0x0000000000000000-0x000000000fffcfff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x0fffcfff]
[    0.000000]   NODE_DATA [mem 0x0fff8000-0x0fffcfff]
[    0.000000] kvm-clock: Using msrs 12 and 11
[    0.000000] kvm-clock: cpu 0, msr 0:1c5fe01, boot clock
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00010000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00010000-0x0009efff]
[    0.000000]   node   0: [mem 0x00100000-0x0fffcfff]
[    0.000000] On node 0 totalpages: 65420
[    0.000000]   DMA zone: 64 pages used for memmap
[    0.000000]   DMA zone: 6 pages reserved
[    0.000000]   DMA zone: 3913 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 960 pages used for memmap
[    0.000000]   DMA32 zone: 60477 pages, LIFO batch:15
[    0.000000] Intel MultiProcessor Specification v1.4
[    0.000000]   mpc: f8870-f898c
[    0.000000] MPTABLE: OEM ID: BOCHSCPU
[    0.000000] MPTABLE: Product ID: 0.1         
[    0.000000] MPTABLE: APIC at: 0xFEE00000
[    0.000000] mapped APIC to ffffffffff5fb000 (        fee00000)
[    0.000000] Processor #0 (Bootup-CPU)
[    0.000000] Processor #1
[    0.000000] Bus #0 is PCI   
[    0.000000] Bus #1 is ISA   
[    0.000000] IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 04, APIC ID 2, APIC INT 09
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 0c, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 10, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 14, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 18, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 1c, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 20, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 24, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 00, APIC ID 2, APIC INT 02
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 01, APIC ID 2, APIC INT 01
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 03, APIC ID 2, APIC INT 03
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 04, APIC ID 2, APIC INT 04
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 05, APIC ID 2, APIC INT 05
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 06, APIC ID 2, APIC INT 06
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 07, APIC ID 2, APIC INT 07
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 08, APIC ID 2, APIC INT 08
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0c, APIC ID 2, APIC INT 0c
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0d, APIC ID 2, APIC INT 0d
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0e, APIC ID 2, APIC INT 0e
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0f, APIC ID 2, APIC INT 0f
[    0.000000] Lint: type 3, pol 0, trig 0, bus 01, IRQ 00, APIC ID 0, APIC LINT 00
[    0.000000] Lint: type 1, pol 0, trig 0, bus 01, IRQ 00, APIC ID 0, APIC LINT 01
[    0.000000] Processors: 2
[    0.000000] smpboot: Allowing 2 CPUs, 0 hotplug CPUs
[    0.000000] mapped IOAPIC to ffffffffff5fa000 (fec00000)
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: 000000000009f000 - 00000000000a0000
[    0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000
[    0.000000] PM: Registered nosave memory: 00000000000f0000 - 0000000000100000
[    0.000000] e820: [mem 0x10000000-0xfffbbfff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on KVM
[    0.000000] setup_percpu: NR_CPUS:8 nr_cpumask_bits:8 nr_cpu_ids:2 nr_node_ids:1
[    0.000000] PERCPU: Embedded 26 pages/cpu @ffff88000dc00000 s76800 r8192 d21504 u1048576
[    0.000000] pcpu-alloc: s76800 r8192 d21504 u1048576 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 
[    0.000000] kvm-clock: cpu 0, msr 0:dc11e01, primary cpu clock
[    0.000000] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 64390
[    0.000000] Policy zone: DMA32
[    0.000000] Kernel command line: bisect-reboot x86_64-randconfig run_test= trinity=0 auth_hashtable_size=10 sunrpc.auth_hashtable_size=10 log_buf_len=8M ignore_loglevel debug sched_debug apic=debug dynamic_printk sysrq_always_enabled panic=10 hung_task_panic=1 softlockup_panic=1 unknown_nmi_panic=1 nmi_watchdog=panic,lapic  prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal  root=/dev/ram0 rw BOOT_IMAGE=x86_64/vmlinuz-bisect
[    0.000000] PID hash table entries: 1024 (order: 1, 8192 bytes)
[    0.000000] __ex_table already sorted, skipping sort
[    0.000000] Memory: 200000k/262132k available (4835k kernel code, 452k absent, 61680k reserved, 7751k data, 568k init)
[    0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
[    0.000000] Hierarchical RCU implementation.
[    0.000000] 	RCU debugfs-based tracing is enabled.
[    0.000000] 	RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
[    0.000000] NR_IRQS:4352 nr_irqs:56 16
[    0.000000] console [ttyS0] enabled
[    0.000000] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
[    0.000000] ... MAX_LOCKDEP_SUBCLASSES:  8
[    0.000000] ... MAX_LOCK_DEPTH:          48
[    0.000000] ... MAX_LOCKDEP_KEYS:        8191
[    0.000000] ... CLASSHASH_SIZE:          4096
[    0.000000] ... MAX_LOCKDEP_ENTRIES:     16384
[    0.000000] ... MAX_LOCKDEP_CHAINS:      32768
[    0.000000] ... CHAINHASH_SIZE:          16384
[    0.000000]  memory used by lock dependency info: 5855 kB
[    0.000000]  per task-struct memory footprint: 1920 bytes
[    0.000000] ------------------------
[    0.000000] | Locking API testsuite:
[    0.000000] ----------------------------------------------------------------------------
[    0.000000]                                  | spin |wlock |rlock |mutex | wsem | rsem |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]                      A-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                  A-B-B-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]              A-B-B-C-C-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]              A-B-C-A-B-C deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-B-C-C-D-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-C-D-B-D-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-C-D-B-C-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                     double unlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                   initialize held:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                  bad unlock order:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]               recursive read-lock:             |  ok  |             |  ok  |
[    0.000000]            recursive read-lock #2:             |  ok  |             |  ok  |
[    0.000000]             mixed read-write-lock:             |  ok  |             |  ok  |
[    0.000000]             mixed write-read-lock:             |  ok  |             |  ok  |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]      hard-irqs-on + irq-safe-A/12:  ok  |  ok  |  ok  |
[    0.000000]      soft-irqs-on + irq-safe-A/12:  ok  |  ok  |  ok  |
[    0.000000]      hard-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
[    0.000000]      soft-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
[    0.000000]        sirq-safe-A => hirqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]        sirq-safe-A => hirqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]          hard-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]          soft-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]          hard-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]          soft-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/132:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/132:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/213:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/213:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/231:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/231:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/312:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/312:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/321:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/321:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/123:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/123:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/132:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/132:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/213:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/213:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/231:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/231:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/312:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/312:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/321:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/321:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/123:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/123:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/132:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/132:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/213:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/213:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/231:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/231:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/312:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/312:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/321:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/321:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq read-recursion/123:  ok  |
[    0.000000]       soft-irq read-recursion/123:  ok  |
[    0.000000]       hard-irq read-recursion/132:  ok  |
[    0.000000]       soft-irq read-recursion/132:  ok  |
[    0.000000]       hard-irq read-recursion/213:  ok  |
[    0.000000]       soft-irq read-recursion/213:  ok  |
[    0.000000]       hard-irq read-recursion/231:  ok  |
[    0.000000]       soft-irq read-recursion/231:  ok  |
[    0.000000]       hard-irq read-recursion/312:  ok  |
[    0.000000]       soft-irq read-recursion/312:  ok  |
[    0.000000]       hard-irq read-recursion/321:  ok  |
[    0.000000]       soft-irq read-recursion/321:  ok  |
[    0.000000] -------------------------------------------------------
[    0.000000] Good, all 218 testcases passed! |
[    0.000000] ---------------------------------
[    0.000000] tsc: Detected 3299.986 MHz processor
[    0.000999] Calibrating delay loop (skipped) preset value.. 6599.97 BogoMIPS (lpj=3299986)
[    0.002008] pid_max: default: 32768 minimum: 301
[    0.003176] Security Framework initialized
[    0.004304] Dentry cache hash table entries: 32768 (order: 6, 262144 bytes)
[    0.006232] Inode-cache hash table entries: 16384 (order: 5, 131072 bytes)
[    0.007245] Mount-cache hash table entries: 256
[    0.010107] Initializing cgroup subsys debug
[    0.010876] Initializing cgroup subsys freezer
[    0.011009] Initializing cgroup subsys perf_event
[    0.012104] Disabled fast string operations
[    0.014242] ftrace: allocating 10983 entries in 43 pages
[    0.020312] Getting VERSION: 50014
[    0.021011] Getting VERSION: 50014
[    0.021605] Getting ID: 0
[    0.022010] Getting ID: ff000000
[    0.022583] Getting LVT0: 8700
[    0.023008] Getting LVT1: 8400
[    0.023589] enabled ExtINT on CPU#0
[    0.025253] ENABLING IO-APIC IRQs
[    0.025839] init IO_APIC IRQs
[    0.026007]  apic 2 pin 0 not connected
[    0.027032] IOAPIC[0]: Set routing entry (2-1 -> 0x41 -> IRQ 1 Mode:0 Active:0 Dest:1)
[    0.028026] IOAPIC[0]: Set routing entry (2-2 -> 0x51 -> IRQ 0 Mode:0 Active:0 Dest:1)
[    0.029033] IOAPIC[0]: Set routing entry (2-3 -> 0x61 -> IRQ 3 Mode:0 Active:0 Dest:1)
[    0.030043] IOAPIC[0]: Set routing entry (2-4 -> 0x71 -> IRQ 4 Mode:0 Active:0 Dest:1)
[    0.031022] IOAPIC[0]: Set routing entry (2-5 -> 0x81 -> IRQ 5 Mode:0 Active:0 Dest:1)
[    0.033031] IOAPIC[0]: Set routing entry (2-6 -> 0x91 -> IRQ 6 Mode:0 Active:0 Dest:1)
[    0.034022] IOAPIC[0]: Set routing entry (2-7 -> 0xa1 -> IRQ 7 Mode:0 Active:0 Dest:1)
[    0.036021] IOAPIC[0]: Set routing entry (2-8 -> 0xb1 -> IRQ 8 Mode:0 Active:0 Dest:1)
[    0.037028] IOAPIC[0]: Set routing entry (2-9 -> 0xc1 -> IRQ 33 Mode:1 Active:0 Dest:1)
[    0.038025] IOAPIC[0]: Set routing entry (2-10 -> 0xd1 -> IRQ 34 Mode:1 Active:0 Dest:1)
[    0.040023] IOAPIC[0]: Set routing entry (2-11 -> 0xe1 -> IRQ 35 Mode:1 Active:0 Dest:1)
[    0.041019] IOAPIC[0]: Set routing entry (2-12 -> 0x22 -> IRQ 12 Mode:0 Active:0 Dest:1)
[    0.043020] IOAPIC[0]: Set routing entry (2-13 -> 0x42 -> IRQ 13 Mode:0 Active:0 Dest:1)
[    0.044021] IOAPIC[0]: Set routing entry (2-14 -> 0x52 -> IRQ 14 Mode:0 Active:0 Dest:1)
[    0.046005] IOAPIC[0]: Set routing entry (2-15 -> 0x62 -> IRQ 15 Mode:0 Active:0 Dest:1)
[    0.047016]  apic 2 pin 16 not connected
[    0.048002]  apic 2 pin 17 not connected
[    0.048693]  apic 2 pin 18 not connected
[    0.049001]  apic 2 pin 19 not connected
[    0.050001]  apic 2 pin 20 not connected
[    0.050681]  apic 2 pin 21 not connected
[    0.051001]  apic 2 pin 22 not connected
[    0.052001]  apic 2 pin 23 not connected
[    0.052857] ..TIMER: vector=0x51 apic1=0 pin1=2 apic2=-1 pin2=-1
[    0.054000] smpboot: CPU0: Intel Common KVM processor stepping 01
[    0.056001] Using local APIC timer interrupts.
[    0.056001] calibrating APIC timer ...
[    0.057995] ... lapic delta = 6248865
[    0.057995] ..... delta 6248865
[    0.057995] ..... mult: 268427509
[    0.057995] ..... calibration result: 999818
[    0.057995] ..... CPU clock speed is 3299.0401 MHz.
[    0.057995] ..... host bus clock speed is 999.0818 MHz.
[    0.057995] ... verify APIC timer
[    0.164423] ... jiffies delta = 100
[    0.164989] ... jiffies result ok
[    0.165669] Performance Events: unsupported Netburst CPU model 6 no PMU driver, software events only.
[    0.167001] XXX cpu=0 gcwq=ffff88000dc0cfc0 base=ffff88000dc11e80
[    0.167989] XXX cpu=0 nr_running=0 @ ffff88000dc11e80
[    0.168988] XXX cpu=0 nr_running=0 @ ffff88000dc11e88
[    0.169988] XXX cpu=1 gcwq=ffff88000dd0cfc0 base=ffff88000dd11e80
[    0.170988] XXX cpu=1 nr_running=0 @ ffff88000dd11e80
[    0.171987] XXX cpu=1 nr_running=0 @ ffff88000dd11e88
[    0.172988] XXX cpu=8 nr_running=0 @ ffffffff81d7c430
[    0.173987] XXX cpu=8 nr_running=12 @ ffffffff81d7c438
[    0.175416] ------------[ cut here ]------------
[    0.175981] WARNING: at /c/wfg/linux/kernel/workqueue.c:1220 worker_enter_idle+0x2b8/0x32b()
[    0.175981] Modules linked in:
[    0.175981] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-bisect-next-20120712-dirty #102
[    0.175981] Call Trace:
[    0.175981]  [<ffffffff81087455>] ? worker_enter_idle+0x2b8/0x32b
[    0.175981]  [<ffffffff810559d1>] warn_slowpath_common+0xae/0xdb
[    0.175981]  [<ffffffff81055a26>] warn_slowpath_null+0x28/0x31
[    0.175981]  [<ffffffff81087455>] worker_enter_idle+0x2b8/0x32b
[    0.175981]  [<ffffffff810874ee>] start_worker+0x26/0x42
[    0.175981]  [<ffffffff81c7dc4d>] init_workqueues+0x370/0x638
[    0.175981]  [<ffffffff81c7d8dd>] ? usermodehelper_init+0x8a/0x8a
[    0.175981]  [<ffffffff81000284>] do_one_initcall+0xce/0x272
[    0.175981]  [<ffffffff81c62652>] kernel_init+0x12e/0x3c1
[    0.175981]  [<ffffffff814b6e74>] kernel_thread_helper+0x4/0x10
[    0.175981]  [<ffffffff814b53b0>] ? retint_restore_args+0x13/0x13
[    0.175981]  [<ffffffff81c62524>] ? start_kernel+0x739/0x739
[    0.175981]  [<ffffffff814b6e70>] ? gs_change+0x13/0x13
[    0.175981] ---[ end trace c22d98677c4d3e37 ]---
[    0.178091] Testing tracer nop: PASSED
[    0.179138] NMI watchdog: disabled (cpu0): hardware events not enabled
[    0.181221] SMP alternatives: lockdep: fixing up alternatives
[    0.181995] smpboot: Booting Node   0, Processors  #1 OK
[    0.000999] kvm-clock: cpu 1, msr 0:dd11e01, secondary cpu clock
[    0.000999] masked ExtINT on CPU#1
[    0.000999] Disabled fast string operations
[    0.207203] Brought up 2 CPUs
[    0.207732] smpboot: Total of 2 processors activated (13199.94 BogoMIPS)
[    0.209280] CPU0 attaching sched-domain:
[    0.210007]  domain 0: span 0-1 level CPU
[    0.210710]   groups: 0 (cpu_power = 1023) 1
[    0.211440] CPU1 attaching sched-domain:
[    0.211983]  domain 0: span 0-1 level CPU
[    0.212694]   groups: 1 0 (cpu_power = 1023)
[    0.218232] devtmpfs: initialized
[    0.218877] device: 'platform': device_add
[    0.219027] PM: Adding info for No Bus:platform
[    0.220063] bus: 'platform': registered
[    0.221055] bus: 'cpu': registered
[    0.221683] device: 'cpu': device_add
[    0.222014] PM: Adding info for No Bus:cpu
[    0.223020] bus: 'memory': registered
[    0.223985] device: 'memory': device_add
[    0.224670] PM: Adding info for No Bus:memory
[    0.230912] device: 'memory0': device_add
[    0.231006] bus: 'memory': add device memory0
[    0.232066] PM: Adding info for memory:memory0
[    0.233071] device: 'memory1': device_add
[    0.233986] bus: 'memory': add device memory1
[    0.234765] PM: Adding info for memory:memory1
[    0.248722] atomic64 test passed for x86-64 platform with CX8 and with SSE
[    0.249977] device class 'regulator': registering
[    0.251020] Registering platform device 'reg-dummy'. Parent at platform
[    0.251991] device: 'reg-dummy': device_add
[    0.252985] bus: 'platform': add device reg-dummy
[    0.253848] PM: Adding info for platform:reg-dummy
[    0.260849] bus: 'platform': add driver reg-dummy
[    0.260984] bus: 'platform': driver_probe_device: matched device reg-dummy with driver reg-dummy
[    0.262977] bus: 'platform': really_probe: probing driver reg-dummy with device reg-dummy
[    0.264070] device: 'regulator.0': device_add
[    0.265133] PM: Adding info for No Bus:regulator.0
[    0.266085] dummy: 
[    0.273208] driver: 'reg-dummy': driver_bound: bound to device 'reg-dummy'
[    0.274005] bus: 'platform': really_probe: bound device reg-dummy to driver reg-dummy
[    0.275092] RTC time:  0:34:29, date: 07/13/12
[    0.276994] NET: Registered protocol family 16
[    0.277905] device class 'bdi': registering
[    0.278011] device class 'tty': registering
[    0.279013] bus: 'node': registered
[    0.286795] device: 'node': device_add
[    0.287020] PM: Adding info for No Bus:node
[    0.288127] device class 'dma': registering
[    0.289071] device: 'node0': device_add
[    0.289747] bus: 'node': add device node0
[    0.289994] PM: Adding info for node:node0
[    0.291031] device: 'cpu0': device_add
[    0.291977] bus: 'cpu': add device cpu0
[    0.292677] PM: Adding info for cpu:cpu0
[    0.299186] device: 'cpu1': device_add
[    0.299860] bus: 'cpu': add device cpu1
[    0.299992] PM: Adding info for cpu:cpu1
[    0.301007] mtrr: your CPUs had inconsistent variable MTRR settings
[    0.301969] mtrr: your CPUs had inconsistent MTRRdefType settings
[    0.302968] mtrr: probably your BIOS does not setup all CPUs.
[    0.303968] mtrr: corrected configuration.
[    0.311821] device: 'default': device_add
[    0.312027] PM: Adding info for No Bus:default
[    0.314526] bio: create slab <bio-0> at 0
[    0.315020] device class 'block': registering
[    0.317769] device class 'misc': registering
[    0.318022] bus: 'serio': registered
[    0.318967] device class 'input': registering
[    0.320006] device class 'power_supply': registering
[    0.320994] device class 'leds': registering
[    0.321795] device class 'net': registering
[    0.322030] device: 'lo': device_add
[    0.323147] PM: Adding info for No Bus:lo
[    0.330653] Switching to clocksource kvm-clock
[    0.332373] Warning: could not register all branches stats
[    0.333365] Warning: could not register annotated branches stats
[    0.413675] device class 'mem': registering
[    0.414493] device: 'mem': device_add
[    0.420754] PM: Adding info for No Bus:mem
[    0.421550] device: 'kmem': device_add
[    0.423861] PM: Adding info for No Bus:kmem
[    0.424642] device: 'null': device_add
[    0.426918] PM: Adding info for No Bus:null
[    0.427694] device: 'zero': device_add
[    0.430025] PM: Adding info for No Bus:zero
[    0.430773] device: 'full': device_add
[    0.433074] PM: Adding info for No Bus:full
[    0.433838] device: 'random': device_add
[    0.436151] PM: Adding info for No Bus:random
[    0.436919] device: 'urandom': device_add
[    0.439276] PM: Adding info for No Bus:urandom
[    0.440100] device: 'kmsg': device_add
[    0.442396] PM: Adding info for No Bus:kmsg
[    0.443148] device: 'tty': device_add
[    0.445317] PM: Adding info for No Bus:tty
[    0.446087] device: 'console': device_add
[    0.448386] PM: Adding info for No Bus:console
[    0.449224] NET: Registered protocol family 1
[    0.450284] Unpacking initramfs...
[    1.877893] debug: unmapping init [mem 0xffff88000e8d6000-0xffff88000ffeffff]
[    1.903095] DMA-API: preallocated 32768 debug entries
[    1.903966] DMA-API: debugging enabled by kernel config
[    1.905059] Registering platform device 'rtc_cmos'. Parent at platform
[    1.906178] device: 'rtc_cmos': device_add
[    1.906884] bus: 'platform': add device rtc_cmos
[    1.907727] PM: Adding info for platform:rtc_cmos
[    1.908579] platform rtc_cmos: registered platform RTC device (no PNP device found)
[    1.910170] device: 'snapshot': device_add
[    1.911083] PM: Adding info for No Bus:snapshot
[    1.911949] bus: 'clocksource': registered
[    1.912686] device: 'clocksource': device_add
[    1.913480] PM: Adding info for No Bus:clocksource
[    1.914328] device: 'clocksource0': device_add
[    1.915092] bus: 'clocksource': add device clocksource0
[    1.915985] PM: Adding info for clocksource:clocksource0
[    1.916938] bus: 'platform': add driver alarmtimer
[    1.917799] Registering platform device 'alarmtimer'. Parent at platform
[    1.918948] device: 'alarmtimer': device_add
[    1.919693] bus: 'platform': add device alarmtimer
[    1.920546] PM: Adding info for platform:alarmtimer
[    1.921413] bus: 'platform': driver_probe_device: matched device alarmtimer with driver alarmtimer
[    1.922931] bus: 'platform': really_probe: probing driver alarmtimer with device alarmtimer
[    1.924342] driver: 'alarmtimer': driver_bound: bound to device 'alarmtimer'
[    1.925525] bus: 'platform': really_probe: bound device alarmtimer to driver alarmtimer
[    1.926945] audit: initializing netlink socket (disabled)
[    1.927924] type=2000 audit(1342139670.926:1): initialized
[    1.941097] Testing tracer function: PASSED
[    2.087999] Testing dynamic ftrace: PASSED
[    2.338209] Testing dynamic ftrace ops #1: (1 0 1 1 0) (1 1 2 1 0) (2 1 3 1 940) (2 2 4 1 1027) PASSED
[    2.431997] Testing dynamic ftrace ops #2: (1 0 1 28 0) (1 1 2 297 0) (2 1 3 1 13) (2 2 4 84 96) PASSED
[    2.540363] bus: 'event_source': registered
[    2.541114] device: 'breakpoint': device_add
[    2.541860] bus: 'event_source': add device breakpoint
[    2.542799] PM: Adding info for event_source:breakpoint
[    2.543767] device: 'tracepoint': device_add
[    2.544535] bus: 'event_source': add device tracepoint
[    2.545493] PM: Adding info for event_source:tracepoint
[    2.546442] device: 'software': device_add
[    2.547170] bus: 'event_source': add device software
[    2.548449] PM: Adding info for event_source:software
[    2.549665] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[    2.560548] msgmni has been set to 390
[    2.561843] cryptomgr_test (26) used greatest stack depth: 5736 bytes left
[    2.563190] alg: No test for stdrng (krng)
[    2.564112] device class 'bsg': registering
[    2.564859] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 254)
[    2.566155] io scheduler noop registered (default)
[    2.567035] device: 'ptyp0': device_add
[    2.567860] PM: Adding info for No Bus:ptyp0
[    2.568687] device: 'ptyp1': device_add
[    2.569492] PM: Adding info for No Bus:ptyp1
[    2.570277] device: 'ptyp2': device_add
[    2.571095] PM: Adding info for No Bus:ptyp2
[    2.571873] device: 'ptyp3': device_add
[    2.572693] PM: Adding info for No Bus:ptyp3
[    2.573479] device: 'ptyp4': device_add
[    2.574329] PM: Adding info for No Bus:ptyp4
[    2.575100] device: 'ptyp5': device_add
[    2.575944] PM: Adding info for No Bus:ptyp5
[    2.576723] device: 'ptyp6': device_add
[    2.577525] PM: Adding info for No Bus:ptyp6
[    2.578310] device: 'ptyp7': device_add
[    2.579113] PM: Adding info for No Bus:ptyp7
[    2.579872] device: 'ptyp8': device_add
[    2.580685] PM: Adding info for No Bus:ptyp8
[    2.581469] device: 'ptyp9': device_add
[    2.582286] PM: Adding info for No Bus:ptyp9
[    2.583057] device: 'ptypa': device_add
[    2.583831] PM: Adding info for No Bus:ptypa
[    2.584604] device: 'ptypb': device_add
[    2.585418] PM: Adding info for No Bus:ptypb
[    2.586179] device: 'ptypc': device_add
[    2.586966] PM: Adding info for No Bus:ptypc
[    2.587739] device: 'ptypd': device_add
[    2.588551] PM: Adding info for No Bus:ptypd
[    2.589341] device: 'ptype': device_add
[    2.590252] PM: Adding info for No Bus:ptype
[    2.590998] device: 'ptypf': device_add
[    2.591795] PM: Adding info for No Bus:ptypf
[    2.592572] device: 'ptyq0': device_add
[    2.593400] PM: Adding info for No Bus:ptyq0
[    2.594164] device: 'ptyq1': device_add
[    2.594938] PM: Adding info for No Bus:ptyq1
[    2.595710] device: 'ptyq2': device_add
[    2.596515] PM: Adding info for No Bus:ptyq2
[    2.597330] device: 'ptyq3': device_add
[    2.598162] PM: Adding info for No Bus:ptyq3
[    2.598944] device: 'ptyq4': device_add
[    2.599757] PM: Adding info for No Bus:ptyq4
[    2.600545] device: 'ptyq5': device_add
[    2.601363] PM: Adding info for No Bus:ptyq5
[    2.602125] device: 'ptyq6': device_add
[    2.602949] PM: Adding info for No Bus:ptyq6
[    2.603723] device: 'ptyq7': device_add
[    2.604536] PM: Adding info for No Bus:ptyq7
[    2.605313] device: 'ptyq8': device_add
[    2.606115] PM: Adding info for No Bus:ptyq8
[    2.606877] device: 'ptyq9': device_add
[    2.607708] PM: Adding info for No Bus:ptyq9
[    2.608497] device: 'ptyqa': device_add
[    2.609318] PM: Adding info for No Bus:ptyqa
[    2.610081] device: 'ptyqb': device_add
[    2.610861] PM: Adding info for No Bus:ptyqb
[    2.611631] device: 'ptyqc': device_add
[    2.612444] PM: Adding info for No Bus:ptyqc
[    2.613210] device: 'ptyqd': device_add
[    2.613981] PM: Adding info for No Bus:ptyqd
[    2.614764] device: 'ptyqe': device_add
[    2.615591] PM: Adding info for No Bus:ptyqe
[    2.616375] device: 'ptyqf': device_add
[    2.617165] PM: Adding info for No Bus:ptyqf
[    2.617915] device: 'ptyr0': device_add
[    2.618743] PM: Adding info for No Bus:ptyr0
[    2.619519] device: 'ptyr1': device_add
[    2.620410] PM: Adding info for No Bus:ptyr1
[    2.621175] device: 'ptyr2': device_add
[    2.621952] PM: Adding info for No Bus:ptyr2
[    2.622761] device: 'ptyr3': device_add
[    2.623590] PM: Adding info for No Bus:ptyr3
[    2.624382] device: 'ptyr4': device_add
[    2.625194] PM: Adding info for No Bus:ptyr4
[    2.625964] device: 'ptyr5': device_add
[    2.626783] PM: Adding info for No Bus:ptyr5
[    2.627559] device: 'ptyr6': device_add
[    2.628369] PM: Adding info for No Bus:ptyr6
[    2.629133] device: 'ptyr7': device_add
[    2.629969] PM: Adding info for No Bus:ptyr7
[    2.630741] device: 'ptyr8': device_add
[    2.631555] PM: Adding info for No Bus:ptyr8
[    2.632348] device: 'ptyr9': device_add
[    2.633151] PM: Adding info for No Bus:ptyr9
[    2.633911] device: 'ptyra': device_add
[    2.634730] PM: Adding info for No Bus:ptyra
[    2.635504] device: 'ptyrb': device_add
[    2.636326] PM: Adding info for No Bus:ptyrb
[    2.637086] device: 'ptyrc': device_add
[    2.637887] PM: Adding info for No Bus:ptyrc
[    2.638679] device: 'ptyrd': device_add
[    2.639477] PM: Adding info for No Bus:ptyrd
[    2.640237] device: 'ptyre': device_add
[    2.641060] PM: Adding info for No Bus:ptyre
[    2.641829] device: 'ptyrf': device_add
[    2.642658] PM: Adding info for No Bus:ptyrf
[    2.643433] device: 'ptys0': device_add
[    2.644216] PM: Adding info for No Bus:ptys0
[    2.644963] device: 'ptys1': device_add
[    2.645775] PM: Adding info for No Bus:ptys1
[    2.646550] device: 'ptys2': device_add
[    2.647346] PM: Adding info for No Bus:ptys2
[    2.648134] device: 'ptys3': device_add
[    2.648943] PM: Adding info for No Bus:ptys3
[    2.649735] device: 'ptys4': device_add
[    2.650649] PM: Adding info for No Bus:ptys4
[    2.651445] device: 'ptys5': device_add
[    2.652265] PM: Adding info for No Bus:ptys5
[    2.653031] device: 'ptys6': device_add
[    2.653830] PM: Adding info for No Bus:ptys6
[    2.654604] device: 'ptys7': device_add
[    2.655402] PM: Adding info for No Bus:ptys7
[    2.656162] device: 'ptys8': device_add
[    2.656994] PM: Adding info for No Bus:ptys8
[    2.657777] device: 'ptys9': device_add
[    2.658606] PM: Adding info for No Bus:ptys9
[    2.659397] device: 'ptysa': device_add
[    2.660209] PM: Adding info for No Bus:ptysa
[    2.660961] device: 'ptysb': device_add
[    2.661761] PM: Adding info for No Bus:ptysb
[    2.662534] device: 'ptysc': device_add
[    2.663346] PM: Adding info for No Bus:ptysc
[    2.664106] device: 'ptysd': device_add
[    2.664899] PM: Adding info for No Bus:ptysd
[    2.665672] device: 'ptyse': device_add
[    2.666472] PM: Adding info for No Bus:ptyse
[    2.667259] device: 'ptysf': device_add
[    2.668082] PM: Adding info for No Bus:ptysf
[    2.668851] device: 'ptyt0': device_add
[    2.669657] PM: Adding info for No Bus:ptyt0
[    2.670428] device: 'ptyt1': device_add
[    2.671233] PM: Adding info for No Bus:ptyt1
[    2.671982] device: 'ptyt2': device_add
[    2.672780] PM: Adding info for No Bus:ptyt2
[    2.673586] device: 'ptyt3': device_add
[    2.674402] PM: Adding info for No Bus:ptyt3
[    2.675173] device: 'ptyt4': device_add
[    2.675995] PM: Adding info for No Bus:ptyt4
[    2.676805] device: 'ptyt5': device_add
[    2.677725] PM: Adding info for No Bus:ptyt5
[    2.678507] device: 'ptyt6': device_add
[    2.679343] PM: Adding info for No Bus:ptyt6
[    2.680165] device: 'ptyt7': device_add
[    2.680943] PM: Adding info for No Bus:ptyt7
[    2.681712] device: 'ptyt8': device_add
[    2.682524] PM: Adding info for No Bus:ptyt8
[    2.683294] device: 'ptyt9': device_add
[    2.684136] PM: Adding info for No Bus:ptyt9
[    2.684897] device: 'ptyta': device_add
[    2.685729] PM: Adding info for No Bus:ptyta
[    2.686503] device: 'ptytb': device_add
[    2.687320] PM: Adding info for No Bus:ptytb
[    2.688083] device: 'ptytc': device_add
[    2.688876] PM: Adding info for No Bus:ptytc
[    2.689644] device: 'ptytd': device_add
[    2.690456] PM: Adding info for No Bus:ptytd
[    2.691218] device: 'ptyte': device_add
[    2.691995] PM: Adding info for No Bus:ptyte
[    2.692781] device: 'ptytf': device_add
[    2.693607] PM: Adding info for No Bus:ptytf
[    2.694392] device: 'ptyu0': device_add
[    2.695182] PM: Adding info for No Bus:ptyu0
[    2.695933] device: 'ptyu1': device_add
[    2.696746] PM: Adding info for No Bus:ptyu1
[    2.697516] device: 'ptyu2': device_add
[    2.698371] PM: Adding info for No Bus:ptyu2
[    2.699166] device: 'ptyu3': device_add
[    2.699967] PM: Adding info for No Bus:ptyu3
[    2.700743] device: 'ptyu4': device_add
[    2.701587] PM: Adding info for No Bus:ptyu4
[    2.702392] device: 'ptyu5': device_add
[    2.703192] PM: Adding info for No Bus:ptyu5
[    2.703944] device: 'ptyu6': device_add
[    2.704762] PM: Adding info for No Bus:ptyu6
[    2.705538] device: 'ptyu7': device_add
[    2.706334] PM: Adding info for No Bus:ptyu7
[    2.707093] device: 'ptyu8': device_add
[    2.707894] PM: Adding info for No Bus:ptyu8
[    2.708686] device: 'ptyu9': device_add
[    2.709503] PM: Adding info for No Bus:ptyu9
[    2.710368] device: 'ptyua': device_add
[    2.711209] PM: Adding info for No Bus:ptyua
[    2.711966] device: 'ptyub': device_add
[    2.712873] PM: Adding info for No Bus:ptyub
[    2.713652] device: 'ptyuc': device_add
[    2.714448] PM: Adding info for No Bus:ptyuc
[    2.715210] device: 'ptyud': device_add
[    2.716008] PM: Adding info for No Bus:ptyud
[    2.716780] device: 'ptyue': device_add
[    2.717578] PM: Adding info for No Bus:ptyue
[    2.718367] device: 'ptyuf': device_add
[    2.719187] PM: Adding info for No Bus:ptyuf
[    2.719954] device: 'ptyv0': device_add
[    2.720776] PM: Adding info for No Bus:ptyv0
[    2.721552] device: 'ptyv1': device_add
[    2.722418] PM: Adding info for No Bus:ptyv1
[    2.723180] device: 'ptyv2': device_add
[    2.724095] PM: Adding info for No Bus:ptyv2
[    2.724884] device: 'ptyv3': device_add
[    2.725769] PM: Adding info for No Bus:ptyv3
[    2.726544] device: 'ptyv4': device_add
[    2.727500] PM: Adding info for No Bus:ptyv4
[    2.728325] device: 'ptyv5': device_add
[    2.729140] PM: Adding info for No Bus:ptyv5
[    2.729889] device: 'ptyv6': device_add
[    2.730726] PM: Adding info for No Bus:ptyv6
[    2.731504] device: 'ptyv7': device_add
[    2.732445] PM: Adding info for No Bus:ptyv7
[    2.733206] device: 'ptyv8': device_add
[    2.734081] PM: Adding info for No Bus:ptyv8
[    2.734831] device: 'ptyv9': device_add
[    2.735716] PM: Adding info for No Bus:ptyv9
[    2.736502] device: 'ptyva': device_add
[    2.737435] PM: Adding info for No Bus:ptyva
[    2.738212] device: 'ptyvb': device_add
[    2.739086] PM: Adding info for No Bus:ptyvb
[    2.739837] device: 'ptyvc': device_add
[    2.740723] PM: Adding info for No Bus:ptyvc
[    2.741497] device: 'ptyvd': device_add
[    2.742336] PM: Adding info for No Bus:ptyvd
[    2.743093] device: 'ptyve': device_add
[    2.743890] PM: Adding info for No Bus:ptyve
[    2.744667] device: 'ptyvf': device_add
[    2.745476] PM: Adding info for No Bus:ptyvf
[    2.746265] device: 'ptyw0': device_add
[    2.747090] PM: Adding info for No Bus:ptyw0
[    2.747841] device: 'ptyw1': device_add
[    2.748653] PM: Adding info for No Bus:ptyw1
[    2.749428] device: 'ptyw2': device_add
[    2.750230] PM: Adding info for No Bus:ptyw2
[    2.751033] device: 'ptyw3': device_add
[    2.751816] PM: Adding info for No Bus:ptyw3
[    2.752589] device: 'ptyw4': device_add
[    2.753422] PM: Adding info for No Bus:ptyw4
[    2.754213] device: 'ptyw5': device_add
[    2.755052] PM: Adding info for No Bus:ptyw5
[    2.755813] device: 'ptyw6': device_add
[    2.756721] PM: Adding info for No Bus:ptyw6
[    2.757502] device: 'ptyw7': device_add
[    2.758327] PM: Adding info for No Bus:ptyw7
[    2.759087] device: 'ptyw8': device_add
[    2.759882] PM: Adding info for No Bus:ptyw8
[    2.760655] device: 'ptyw9': device_add
[    2.761472] PM: Adding info for No Bus:ptyw9
[    2.762255] device: 'ptywa': device_add
[    2.763062] PM: Adding info for No Bus:ptywa
[    2.763826] device: 'ptywb': device_add
[    2.764645] PM: Adding info for No Bus:ptywb
[    2.765420] device: 'ptywc': device_add
[    2.766272] PM: Adding info for No Bus:ptywc
[    2.767041] device: 'ptywd': device_add
[    2.767827] PM: Adding info for No Bus:ptywd
[    2.768607] device: 'ptywe': device_add
[    2.769421] PM: Adding info for No Bus:ptywe
[    2.770262] device: 'ptywf': device_add
[    2.771064] PM: Adding info for No Bus:ptywf
[    2.771835] device: 'ptyx0': device_add
[    2.772670] PM: Adding info for No Bus:ptyx0
[    2.773444] device: 'ptyx1': device_add
[    2.774231] PM: Adding info for No Bus:ptyx1
[    2.774978] device: 'ptyx2': device_add
[    2.775811] PM: Adding info for No Bus:ptyx2
[    2.776619] device: 'ptyx3': device_add
[    2.777442] PM: Adding info for No Bus:ptyx3
[    2.778202] device: 'ptyx4': device_add
[    2.779048] PM: Adding info for No Bus:ptyx4
[    2.779823] device: 'ptyx5': device_add
[    2.780653] PM: Adding info for No Bus:ptyx5
[    2.781441] device: 'ptyx6': device_add
[    2.782229] PM: Adding info for No Bus:ptyx6
[    2.782979] device: 'ptyx7': device_add
[    2.783883] PM: Adding info for No Bus:ptyx7
[    2.784659] device: 'ptyx8': device_add
[    2.785541] PM: Adding info for No Bus:ptyx8
[    2.786307] device: 'ptyx9': device_add
[    2.787205] PM: Adding info for No Bus:ptyx9
[    2.787955] device: 'ptyxa': device_add
[    2.788797] PM: Adding info for No Bus:ptyxa
[    2.789596] device: 'ptyxb': device_add
[    2.790419] PM: Adding info for No Bus:ptyxb
[    2.791188] device: 'ptyxc': device_add
[    2.792099] PM: Adding info for No Bus:ptyxc
[    2.792849] device: 'ptyxd': device_add
[    2.793809] PM: Adding info for No Bus:ptyxd
[    2.794582] device: 'ptyxe': device_add
[    2.795471] PM: Adding info for No Bus:ptyxe
[    2.796232] device: 'ptyxf': device_add
[    2.797104] PM: Adding info for No Bus:ptyxf
[    2.797869] device: 'ptyy0': device_add
[    2.798705] PM: Adding info for No Bus:ptyy0
[    2.799486] device: 'ptyy1': device_add
[    2.800389] PM: Adding info for No Bus:ptyy1
[    2.801152] device: 'ptyy2': device_add
[    2.801928] PM: Adding info for No Bus:ptyy2
[    2.802729] device: 'ptyy3': device_add
[    2.803547] PM: Adding info for No Bus:ptyy3
[    2.804321] device: 'ptyy4': device_add
[    2.805132] PM: Adding info for No Bus:ptyy4
[    2.805897] device: 'ptyy5': device_add
[    2.806726] PM: Adding info for No Bus:ptyy5
[    2.807510] device: 'ptyy6': device_add
[    2.808326] PM: Adding info for No Bus:ptyy6
[    2.809083] device: 'ptyy7': device_add
[    2.809890] PM: Adding info for No Bus:ptyy7
[    2.810662] device: 'ptyy8': device_add
[    2.811476] PM: Adding info for No Bus:ptyy8
[    2.812251] device: 'ptyy9': device_add
[    2.813044] PM: Adding info for No Bus:ptyy9
[    2.813794] device: 'ptyya': device_add
[    2.814610] PM: Adding info for No Bus:ptyya
[    2.815401] device: 'ptyyb': device_add
[    2.816204] PM: Adding info for No Bus:ptyyb
[    2.816969] device: 'ptyyc': device_add
[    2.817779] PM: Adding info for No Bus:ptyyc
[    2.818568] device: 'ptyyd': device_add
[    2.819372] PM: Adding info for No Bus:ptyyd
[    2.820130] device: 'ptyye': device_add
[    2.820962] PM: Adding info for No Bus:ptyye
[    2.821735] device: 'ptyyf': device_add
[    2.822548] PM: Adding info for No Bus:ptyyf
[    2.823326] device: 'ptyz0': device_add
[    2.824123] PM: Adding info for No Bus:ptyz0
[    2.824886] device: 'ptyz1': device_add
[    2.825710] PM: Adding info for No Bus:ptyz1
[    2.826488] device: 'ptyz2': device_add
[    2.827283] PM: Adding info for No Bus:ptyz2
[    2.828085] device: 'ptyz3': device_add
[    2.828895] PM: Adding info for No Bus:ptyz3
[    2.829672] device: 'ptyz4': device_add
[    2.830567] PM: Adding info for No Bus:ptyz4
[    2.831354] device: 'ptyz5': device_add
[    2.832178] PM: Adding info for No Bus:ptyz5
[    2.832942] device: 'ptyz6': device_add
[    2.833776] PM: Adding info for No Bus:ptyz6
[    2.834553] device: 'ptyz7': device_add
[    2.835352] PM: Adding info for No Bus:ptyz7
[    2.836114] device: 'ptyz8': device_add
[    2.836906] PM: Adding info for No Bus:ptyz8
[    2.837681] device: 'ptyz9': device_add
[    2.838488] PM: Adding info for No Bus:ptyz9
[    2.839264] device: 'ptyza': device_add
[    2.840073] PM: Adding info for No Bus:ptyza
[    2.840831] device: 'ptyzb': device_add
[    2.841642] PM: Adding info for No Bus:ptyzb
[    2.842430] device: 'ptyzc': device_add
[    2.843238] PM: Adding info for No Bus:ptyzc
[    2.843995] device: 'ptyzd': device_add
[    2.844808] PM: Adding info for No Bus:ptyzd
[    2.845584] device: 'ptyze': device_add
[    2.846381] PM: Adding info for No Bus:ptyze
[    2.847141] device: 'ptyzf': device_add
[    2.847975] PM: Adding info for No Bus:ptyzf
[    2.848761] device: 'ptya0': device_add
[    2.849573] PM: Adding info for No Bus:ptya0
[    2.850360] device: 'ptya1': device_add
[    2.851179] PM: Adding info for No Bus:ptya1
[    2.851930] device: 'ptya2': device_add
[    2.852729] PM: Adding info for No Bus:ptya2
[    2.853533] device: 'ptya3': device_add
[    2.854356] PM: Adding info for No Bus:ptya3
[    2.855119] device: 'ptya4': device_add
[    2.855931] PM: Adding info for No Bus:ptya4
[    2.856721] device: 'ptya5': device_add
[    2.857531] PM: Adding info for No Bus:ptya5
[    2.858325] device: 'ptya6': device_add
[    2.859143] PM: Adding info for No Bus:ptya6
[    2.859994] device: 'ptya7': device_add
[    2.860792] PM: Adding info for No Bus:ptya7
[    2.861559] device: 'ptya8': device_add
[    2.862372] PM: Adding info for No Bus:ptya8
[    2.863136] device: 'ptya9': device_add
[    2.863912] PM: Adding info for No Bus:ptya9
[    2.864687] device: 'ptyaa': device_add
[    2.865502] PM: Adding info for No Bus:ptyaa
[    2.866275] device: 'ptyab': device_add
[    2.867093] PM: Adding info for No Bus:ptyab
[    2.867865] device: 'ptyac': device_add
[    2.868697] PM: Adding info for No Bus:ptyac
[    2.869475] device: 'ptyad': device_add
[    2.870294] PM: Adding info for No Bus:ptyad
[    2.871061] device: 'ptyae': device_add
[    2.871837] PM: Adding info for No Bus:ptyae
[    2.872608] device: 'ptyaf': device_add
[    2.873422] PM: Adding info for No Bus:ptyaf
[    2.874185] device: 'ptyb0': device_add
[    2.875023] PM: Adding info for No Bus:ptyb0
[    2.875784] device: 'ptyb1': device_add
[    2.876610] PM: Adding info for No Bus:ptyb1
[    2.877390] device: 'ptyb2': device_add
[    2.878203] PM: Adding info for No Bus:ptyb2
[    2.878995] device: 'ptyb3': device_add
[    2.879798] PM: Adding info for No Bus:ptyb3
[    2.880568] device: 'ptyb4': device_add
[    2.881406] PM: Adding info for No Bus:ptyb4
[    2.882185] device: 'ptyb5': device_add
[    2.882964] PM: Adding info for No Bus:ptyb5
[    2.883733] device: 'ptyb6': device_add
[    2.884551] PM: Adding info for No Bus:ptyb6
[    2.885342] device: 'ptyb7': device_add
[    2.886148] PM: Adding info for No Bus:ptyb7
[    2.886901] device: 'ptyb8': device_add
[    2.887717] PM: Adding info for No Bus:ptyb8
[    2.888503] device: 'ptyb9': device_add
[    2.889346] PM: Adding info for No Bus:ptyb9
[    2.890200] device: 'ptyba': device_add
[    2.890993] PM: Adding info for No Bus:ptyba
[    2.891770] device: 'ptybb': device_add
[    2.892581] PM: Adding info for No Bus:ptybb
[    2.893365] device: 'ptybc': device_add
[    2.894169] PM: Adding info for No Bus:ptybc
[    2.894937] device: 'ptybd': device_add
[    2.895806] PM: Adding info for No Bus:ptybd
[    2.896580] device: 'ptybe': device_add
[    2.897380] PM: Adding info for No Bus:ptybe
[    2.898142] device: 'ptybf': device_add
[    2.898949] PM: Adding info for No Bus:ptybf
[    2.899727] device: 'ptyc0': device_add
[    2.900538] PM: Adding info for No Bus:ptyc0
[    2.901315] device: 'ptyc1': device_add
[    2.902151] PM: Adding info for No Bus:ptyc1
[    2.902915] device: 'ptyc2': device_add
[    2.903746] PM: Adding info for No Bus:ptyc2
[    2.904553] device: 'ptyc3': device_add
[    2.905354] PM: Adding info for No Bus:ptyc3
[    2.906116] device: 'ptyc4': device_add
[    2.906926] PM: Adding info for No Bus:ptyc4
[    2.907714] device: 'ptyc5': device_add
[    2.908626] PM: Adding info for No Bus:ptyc5
[    2.909401] device: 'ptyc6': device_add
[    2.910205] PM: Adding info for No Bus:ptyc6
[    2.910970] device: 'ptyc7': device_add
[    2.911800] PM: Adding info for No Bus:ptyc7
[    2.912588] device: 'ptyc8': device_add
[    2.913391] PM: Adding info for No Bus:ptyc8
[    2.914150] device: 'ptyc9': device_add
[    2.915065] PM: Adding info for No Bus:ptyc9
[    2.915816] device: 'ptyca': device_add
[    2.916703] PM: Adding info for No Bus:ptyca
[    2.917474] device: 'ptycb': device_add
[    2.918415] PM: Adding info for No Bus:ptycb
[    2.919181] device: 'ptycc': device_add
[    2.919988] PM: Adding info for No Bus:ptycc
[    2.920919] device: 'ptycd': device_add
[    2.921787] PM: Adding info for No Bus:ptycd
[    2.922593] device: 'ptyce': device_add
[    2.923485] PM: Adding info for No Bus:ptyce
[    2.924261] device: 'ptycf': device_add
[    2.925108] PM: Adding info for No Bus:ptycf
[    2.925857] device: 'ptyd0': device_add
[    2.926738] PM: Adding info for No Bus:ptyd0
[    2.927515] device: 'ptyd1': device_add
[    2.928387] PM: Adding info for No Bus:ptyd1
[    2.929159] device: 'ptyd2': device_add
[    2.930059] PM: Adding info for No Bus:ptyd2
[    2.930840] device: 'ptyd3': device_add
[    2.931645] PM: Adding info for No Bus:ptyd3
[    2.932417] device: 'ptyd4': device_add
[    2.933239] PM: Adding info for No Bus:ptyd4
[    2.934032] device: 'ptyd5': device_add
[    2.934827] PM: Adding info for No Bus:ptyd5
[    2.935599] device: 'ptyd6': device_add
[    2.936399] PM: Adding info for No Bus:ptyd6
[    2.937173] device: 'ptyd7': device_add
[    2.937978] PM: Adding info for No Bus:ptyd7
[    2.938784] device: 'ptyd8': device_add
[    2.939587] PM: Adding info for No Bus:ptyd8
[    2.940353] device: 'ptyd9': device_add
[    2.941162] PM: Adding info for No Bus:ptyd9
[    2.941916] device: 'ptyda': device_add
[    2.942716] PM: Adding info for No Bus:ptyda
[    2.943486] device: 'ptydb': device_add
[    2.944309] PM: Adding info for No Bus:ptydb
[    2.945071] device: 'ptydc': device_add
[    2.945877] PM: Adding info for No Bus:ptydc
[    2.946667] device: 'ptydd': device_add
[    2.947478] PM: Adding info for No Bus:ptydd
[    2.948236] device: 'ptyde': device_add
[    2.949061] PM: Adding info for No Bus:ptyde
[    2.949882] device: 'ptydf': device_add
[    2.950680] PM: Adding info for No Bus:ptydf
[    2.951456] device: 'ptye0': device_add
[    2.952271] PM: Adding info for No Bus:ptye0
[    2.953040] device: 'ptye1': device_add
[    2.953819] PM: Adding info for No Bus:ptye1
[    2.954603] device: 'ptye2': device_add
[    2.955431] PM: Adding info for No Bus:ptye2
[    2.956233] device: 'ptye3': device_add
[    2.957093] PM: Adding info for No Bus:ptye3
[    2.957843] device: 'ptye4': device_add
[    2.958674] PM: Adding info for No Bus:ptye4
[    2.959462] device: 'ptye5': device_add
[    2.960274] PM: Adding info for No Bus:ptye5
[    2.961038] device: 'ptye6': device_add
[    2.961817] PM: Adding info for No Bus:ptye6
[    2.962592] device: 'ptye7': device_add
[    2.963423] PM: Adding info for No Bus:ptye7
[    2.964207] device: 'ptye8': device_add
[    2.965002] PM: Adding info for No Bus:ptye8
[    2.965799] device: 'ptye9': device_add
[    2.966631] PM: Adding info for No Bus:ptye9
[    2.967429] device: 'ptyea': device_add
[    2.968275] PM: Adding info for No Bus:ptyea
[    2.969064] device: 'ptyeb': device_add
[    2.969856] PM: Adding info for No Bus:ptyeb
[    2.970645] device: 'ptyec': device_add
[    2.971479] PM: Adding info for No Bus:ptyec
[    2.972274] device: 'ptyed': device_add
[    2.973082] PM: Adding info for No Bus:ptyed
[    2.973846] device: 'ptyee': device_add
[    2.974673] PM: Adding info for No Bus:ptyee
[    2.975449] device: 'ptyef': device_add
[    2.976237] PM: Adding info for No Bus:ptyef
[    2.976991] device: 'ttyp0': device_add
[    2.977809] PM: Adding info for No Bus:ttyp0
[    2.978596] device: 'ttyp1': device_add
[    2.979415] PM: Adding info for No Bus:ttyp1
[    2.980256] device: 'ttyp2': device_add
[    2.981058] PM: Adding info for No Bus:ttyp2
[    2.981854] device: 'ttyp3': device_add
[    2.982690] PM: Adding info for No Bus:ttyp3
[    2.983475] device: 'ttyp4': device_add
[    2.984337] PM: Adding info for No Bus:ttyp4
[    2.985111] device: 'ttyp5': device_add
[    2.985911] PM: Adding info for No Bus:ttyp5
[    2.986683] device: 'ttyp6': device_add
[    2.987515] PM: Adding info for No Bus:ttyp6
[    2.988297] device: 'ttyp7': device_add
[    2.989107] PM: Adding info for No Bus:ttyp7
[    2.989863] device: 'ttyp8': device_add
[    2.990689] PM: Adding info for No Bus:ttyp8
[    2.991492] device: 'ttyp9': device_add
[    2.992300] PM: Adding info for No Bus:ttyp9
[    2.993065] device: 'ttypa': device_add
[    2.993862] PM: Adding info for No Bus:ttypa
[    2.994638] device: 'ttypb': device_add
[    2.995438] PM: Adding info for No Bus:ttypb
[    2.996196] device: 'ttypc': device_add
[    2.996987] PM: Adding info for No Bus:ttypc
[    2.997761] device: 'ttypd': device_add
[    2.998577] PM: Adding info for No Bus:ttypd
[    2.999365] device: 'ttype': device_add
[    3.000187] PM: Adding info for No Bus:ttype
[    3.000939] device: 'ttypf': device_add
[    3.001756] PM: Adding info for No Bus:ttypf
[    3.002533] device: 'ttyq0': device_add
[    3.003348] PM: Adding info for No Bus:ttyq0
[    3.004110] device: 'ttyq1': device_add
[    3.004906] PM: Adding info for No Bus:ttyq1
[    3.005686] device: 'ttyq2': device_add
[    3.006481] PM: Adding info for No Bus:ttyq2
[    3.007293] device: 'ttyq3': device_add
[    3.008117] PM: Adding info for No Bus:ttyq3
[    3.008895] device: 'ttyq4': device_add
[    3.009868] PM: Adding info for No Bus:ttyq4
[    3.010751] device: 'ttyq5': device_add
[    3.011611] PM: Adding info for No Bus:ttyq5
[    3.012386] device: 'ttyq6': device_add
[    3.013190] PM: Adding info for No Bus:ttyq6
[    3.013944] device: 'ttyq7': device_add
[    3.014749] PM: Adding info for No Bus:ttyq7
[    3.015524] device: 'ttyq8': device_add
[    3.016364] PM: Adding info for No Bus:ttyq8
[    3.017146] device: 'ttyq9': device_add
[    3.017939] PM: Adding info for No Bus:ttyq9
[    3.018728] device: 'ttyqa': device_add
[    3.019544] PM: Adding info for No Bus:ttyqa
[    3.020317] device: 'ttyqb': device_add
[    3.021110] PM: Adding info for No Bus:ttyqb
[    3.021863] device: 'ttyqc': device_add
[    3.022680] PM: Adding info for No Bus:ttyqc
[    3.023452] device: 'ttyqd': device_add
[    3.024268] PM: Adding info for No Bus:ttyqd
[    3.025054] device: 'ttyqe': device_add
[    3.025844] PM: Adding info for No Bus:ttyqe
[    3.026625] device: 'ttyqf': device_add
[    3.027537] PM: Adding info for No Bus:ttyqf
[    3.028323] device: 'ttyr0': device_add
[    3.029115] PM: Adding info for No Bus:ttyr0
[    3.029863] device: 'ttyr1': device_add
[    3.030676] PM: Adding info for No Bus:ttyr1
[    3.031451] device: 'ttyr2': device_add
[    3.032239] PM: Adding info for No Bus:ttyr2
[    3.033054] device: 'ttyr3': device_add
[    3.033868] PM: Adding info for No Bus:ttyr3
[    3.034656] device: 'ttyr4': device_add
[    3.035497] PM: Adding info for No Bus:ttyr4
[    3.036290] device: 'ttyr5': device_add
[    3.037081] PM: Adding info for No Bus:ttyr5
[    3.037834] device: 'ttyr6': device_add
[    3.038699] PM: Adding info for No Bus:ttyr6
[    3.039523] device: 'ttyr7': device_add
[    3.040362] PM: Adding info for No Bus:ttyr7
[    3.041124] device: 'ttyr8': device_add
[    3.041923] PM: Adding info for No Bus:ttyr8
[    3.042711] device: 'ttyr9': device_add
[    3.043518] PM: Adding info for No Bus:ttyr9
[    3.044291] device: 'ttyra': device_add
[    3.045106] PM: Adding info for No Bus:ttyra
[    3.045856] device: 'ttyrb': device_add
[    3.046671] PM: Adding info for No Bus:ttyrb
[    3.047448] device: 'ttyrc': device_add
[    3.048237] PM: Adding info for No Bus:ttyrc
[    3.048996] device: 'ttyrd': device_add
[    3.049815] PM: Adding info for No Bus:ttyrd
[    3.050593] device: 'ttyre': device_add
[    3.051407] PM: Adding info for No Bus:ttyre
[    3.052178] device: 'ttyrf': device_add
[    3.052983] PM: Adding info for No Bus:ttyrf
[    3.053760] device: 'ttys0': device_add
[    3.054557] PM: Adding info for No Bus:ttys0
[    3.055331] device: 'ttys1': device_add
[    3.056142] PM: Adding info for No Bus:ttys1
[    3.056894] device: 'ttys2': device_add
[    3.057711] PM: Adding info for No Bus:ttys2
[    3.058530] device: 'ttys3': device_add
[    3.059348] PM: Adding info for No Bus:ttys3
[    3.060126] device: 'ttys4': device_add
[    3.060952] PM: Adding info for No Bus:ttys4
[    3.061761] device: 'ttys5': device_add
[    3.062568] PM: Adding info for No Bus:ttys5
[    3.063340] device: 'ttys6': device_add
[    3.064139] PM: Adding info for No Bus:ttys6
[    3.064892] device: 'ttys7': device_add
[    3.065730] PM: Adding info for No Bus:ttys7
[    3.066504] device: 'ttys8': device_add
[    3.067329] PM: Adding info for No Bus:ttys8
[    3.068094] device: 'ttys9': device_add
[    3.068917] PM: Adding info for No Bus:ttys9
[    3.069715] device: 'ttysa': device_add
[    3.070589] PM: Adding info for No Bus:ttysa
[    3.071363] device: 'ttysb': device_add
[    3.072175] PM: Adding info for No Bus:ttysb
[    3.072925] device: 'ttysc': device_add
[    3.073730] PM: Adding info for No Bus:ttysc
[    3.074501] device: 'ttysd': device_add
[    3.075322] PM: Adding info for No Bus:ttysd
[    3.076085] device: 'ttyse': device_add
[    3.076869] PM: Adding info for No Bus:ttyse
[    3.077658] device: 'ttysf': device_add
[    3.078498] PM: Adding info for No Bus:ttysf
[    3.079273] device: 'ttyt0': device_add
[    3.080082] PM: Adding info for No Bus:ttyt0
[    3.080835] device: 'ttyt1': device_add
[    3.081635] PM: Adding info for No Bus:ttyt1
[    3.082408] device: 'ttyt2': device_add
[    3.083229] PM: Adding info for No Bus:ttyt2
[    3.084032] device: 'ttyt3': device_add
[    3.084841] PM: Adding info for No Bus:ttyt3
[    3.085628] device: 'ttyt4': device_add
[    3.086474] PM: Adding info for No Bus:ttyt4
[    3.087274] device: 'ttyt5': device_add
[    3.088077] PM: Adding info for No Bus:ttyt5
[    3.088844] device: 'ttyt6': device_add
[    3.089663] PM: Adding info for No Bus:ttyt6
[    3.090435] device: 'ttyt7': device_add
[    3.091239] PM: Adding info for No Bus:ttyt7
[    3.091992] device: 'ttyt8': device_add
[    3.092834] PM: Adding info for No Bus:ttyt8
[    3.093608] device: 'ttyt9': device_add
[    3.094436] PM: Adding info for No Bus:ttyt9
[    3.095213] device: 'ttyta': device_add
[    3.096035] PM: Adding info for No Bus:ttyta
[    3.096784] device: 'ttytb': device_add
[    3.097604] PM: Adding info for No Bus:ttytb
[    3.098391] device: 'ttytc': device_add
[    3.099178] PM: Adding info for No Bus:ttytc
[    3.099925] device: 'ttytd': device_add
[    3.100815] PM: Adding info for No Bus:ttytd
[    3.101592] device: 'ttyte': device_add
[    3.102408] PM: Adding info for No Bus:ttyte
[    3.103184] device: 'ttytf': device_add
[    3.103981] PM: Adding info for No Bus:ttytf
[    3.104771] device: 'ttyu0': device_add
[    3.105592] PM: Adding info for No Bus:ttyu0
[    3.106370] device: 'ttyu1': device_add
[    3.107163] PM: Adding info for No Bus:ttyu1
[    3.107913] device: 'ttyu2': device_add
[    3.108743] PM: Adding info for No Bus:ttyu2
[    3.109558] device: 'ttyu3': device_add
[    3.110363] PM: Adding info for No Bus:ttyu3
[    3.111125] device: 'ttyu4': device_add
[    3.111951] PM: Adding info for No Bus:ttyu4
[    3.112754] device: 'ttyu5': device_add
[    3.113589] PM: Adding info for No Bus:ttyu5
[    3.114364] device: 'ttyu6': device_add
[    3.115157] PM: Adding info for No Bus:ttyu6
[    3.115905] device: 'ttyu7': device_add
[    3.116726] PM: Adding info for No Bus:ttyu7
[    3.117499] device: 'ttyu8': device_add
[    3.118324] PM: Adding info for No Bus:ttyu8
[    3.119085] device: 'ttyu9': device_add
[    3.119935] PM: Adding info for No Bus:ttyu9
[    3.120725] device: 'ttyua': device_add
[    3.121548] PM: Adding info for No Bus:ttyua
[    3.122324] device: 'ttyub': device_add
[    3.123128] PM: Adding info for No Bus:ttyub
[    3.123878] device: 'ttyuc': device_add
[    3.124693] PM: Adding info for No Bus:ttyuc
[    3.125467] device: 'ttyud': device_add
[    3.126270] PM: Adding info for No Bus:ttyud
[    3.127030] device: 'ttyue': device_add
[    3.127826] PM: Adding info for No Bus:ttyue
[    3.128616] device: 'ttyuf': device_add
[    3.129428] PM: Adding info for No Bus:ttyuf
[    3.130259] device: 'ttyv0': device_add
[    3.131072] PM: Adding info for No Bus:ttyv0
[    3.131833] device: 'ttyv1': device_add
[    3.132640] PM: Adding info for No Bus:ttyv1
[    3.133411] device: 'ttyv2': device_add
[    3.134216] PM: Adding info for No Bus:ttyv2
[    3.134993] device: 'ttyv3': device_add
[    3.135811] PM: Adding info for No Bus:ttyv3
[    3.136586] device: 'ttyv4': device_add
[    3.137405] PM: Adding info for No Bus:ttyv4
[    3.138199] device: 'ttyv5': device_add
[    3.139049] PM: Adding info for No Bus:ttyv5
[    3.139809] device: 'ttyv6': device_add
[    3.140609] PM: Adding info for No Bus:ttyv6
[    3.141380] device: 'ttyv7': device_add
[    3.142184] PM: Adding info for No Bus:ttyv7
[    3.142935] device: 'ttyv8': device_add
[    3.143734] PM: Adding info for No Bus:ttyv8
[    3.144503] device: 'ttyv9': device_add
[    3.145346] PM: Adding info for No Bus:ttyv9
[    3.146113] device: 'ttyva': device_add
[    3.146969] PM: Adding info for No Bus:ttyva
[    3.147764] device: 'ttyvb': device_add
[    3.148581] PM: Adding info for No Bus:ttyvb
[    3.149354] device: 'ttyvc': device_add
[    3.150167] PM: Adding info for No Bus:ttyvc
[    3.150921] device: 'ttyvd': device_add
[    3.151720] PM: Adding info for No Bus:ttyvd
[    3.152487] device: 'ttyve': device_add
[    3.153305] PM: Adding info for No Bus:ttyve
[    3.154068] device: 'ttyvf': device_add
[    3.154853] PM: Adding info for No Bus:ttyvf
[    3.155640] device: 'ttyw0': device_add
[    3.156463] PM: Adding info for No Bus:ttyw0
[    3.157228] device: 'ttyw1': device_add
[    3.158047] PM: Adding info for No Bus:ttyw1
[    3.158811] device: 'ttyw2': device_add
[    3.159612] PM: Adding info for No Bus:ttyw2
[    3.160482] device: 'ttyw3': device_add
[    3.161305] PM: Adding info for No Bus:ttyw3
[    3.162068] device: 'ttyw4': device_add
[    3.162865] PM: Adding info for No Bus:ttyw4
[    3.163662] device: 'ttyw5': device_add
[    3.164492] PM: Adding info for No Bus:ttyw5
[    3.165281] device: 'ttyw6': device_add
[    3.166075] PM: Adding info for No Bus:ttyw6
[    3.166822] device: 'ttyw7': device_add
[    3.167636] PM: Adding info for No Bus:ttyw7
[    3.168423] device: 'ttyw8': device_add
[    3.169224] PM: Adding info for No Bus:ttyw8
[    3.169973] device: 'ttyw9': device_add
[    3.170771] PM: Adding info for No Bus:ttyw9
[    3.171542] device: 'ttywa': device_add
[    3.172363] PM: Adding info for No Bus:ttywa
[    3.173138] device: 'ttywb': device_add
[    3.173969] PM: Adding info for No Bus:ttywb
[    3.174743] device: 'ttywc': device_add
[    3.175560] PM: Adding info for No Bus:ttywc
[    3.176335] device: 'ttywd': device_add
[    3.177122] PM: Adding info for No Bus:ttywd
[    3.177869] device: 'ttywe': device_add
[    3.178728] PM: Adding info for No Bus:ttywe
[    3.179501] device: 'ttywf': device_add
[    3.180324] PM: Adding info for No Bus:ttywf
[    3.181097] device: 'ttyx0': device_add
[    3.181886] PM: Adding info for No Bus:ttyx0
[    3.182669] device: 'ttyx1': device_add
[    3.183486] PM: Adding info for No Bus:ttyx1
[    3.184260] device: 'ttyx2': device_add
[    3.185075] PM: Adding info for No Bus:ttyx2
[    3.185852] device: 'ttyx3': device_add
[    3.186673] PM: Adding info for No Bus:ttyx3
[    3.187448] device: 'ttyx4': device_add
[    3.188278] PM: Adding info for No Bus:ttyx4
[    3.189057] device: 'ttyx5': device_add
[    3.189860] PM: Adding info for No Bus:ttyx5
[    3.190728] device: 'ttyx6': device_add
[    3.191556] PM: Adding info for No Bus:ttyx6
[    3.192328] device: 'ttyx7': device_add
[    3.193118] PM: Adding info for No Bus:ttyx7
[    3.193864] device: 'ttyx8': device_add
[    3.194678] PM: Adding info for No Bus:ttyx8
[    3.195450] device: 'ttyx9': device_add
[    3.196237] PM: Adding info for No Bus:ttyx9
[    3.196990] device: 'ttyxa': device_add
[    3.197806] PM: Adding info for No Bus:ttyxa
[    3.198602] device: 'ttyxb': device_add
[    3.199416] PM: Adding info for No Bus:ttyxb
[    3.200181] device: 'ttyxc': device_add
[    3.201041] PM: Adding info for No Bus:ttyxc
[    3.201799] device: 'ttyxd': device_add
[    3.202616] PM: Adding info for No Bus:ttyxd
[    3.203391] device: 'ttyxe': device_add
[    3.204178] PM: Adding info for No Bus:ttyxe
[    3.204924] device: 'ttyxf': device_add
[    3.205742] PM: Adding info for No Bus:ttyxf
[    3.206517] device: 'ttyy0': device_add
[    3.207337] PM: Adding info for No Bus:ttyy0
[    3.208110] device: 'ttyy1': device_add
[    3.208922] PM: Adding info for No Bus:ttyy1
[    3.209699] device: 'ttyy2': device_add
[    3.210500] PM: Adding info for No Bus:ttyy2
[    3.211301] device: 'ttyy3': device_add
[    3.212108] PM: Adding info for No Bus:ttyy3
[    3.212857] device: 'ttyy4': device_add
[    3.213688] PM: Adding info for No Bus:ttyy4
[    3.214477] device: 'ttyy5': device_add
[    3.215284] PM: Adding info for No Bus:ttyy5
[    3.216061] device: 'ttyy6': device_add
[    3.216874] PM: Adding info for No Bus:ttyy6
[    3.217663] device: 'ttyy7': device_add
[    3.218473] PM: Adding info for No Bus:ttyy7
[    3.219230] device: 'ttyy8': device_add
[    3.220127] PM: Adding info for No Bus:ttyy8
[    3.220874] device: 'ttyy9': device_add
[    3.221673] PM: Adding info for No Bus:ttyy9
[    3.222442] device: 'ttyya': device_add
[    3.223257] PM: Adding info for No Bus:ttyya
[    3.224007] device: 'ttyyb': device_add
[    3.224836] PM: Adding info for No Bus:ttyyb
[    3.225632] device: 'ttyyc': device_add
[    3.226435] PM: Adding info for No Bus:ttyyc
[    3.227195] device: 'ttyyd': device_add
[    3.228054] PM: Adding info for No Bus:ttyyd
[    3.228816] device: 'ttyye': device_add
[    3.229618] PM: Adding info for No Bus:ttyye
[    3.230390] device: 'ttyyf': device_add
[    3.231192] PM: Adding info for No Bus:ttyyf
[    3.231942] device: 'ttyz0': device_add
[    3.232747] PM: Adding info for No Bus:ttyz0
[    3.233533] device: 'ttyz1': device_add
[    3.234356] PM: Adding info for No Bus:ttyz1
[    3.235127] device: 'ttyz2': device_add
[    3.235925] PM: Adding info for No Bus:ttyz2
[    3.236729] device: 'ttyz3': device_add
[    3.237534] PM: Adding info for No Bus:ttyz3
[    3.238318] device: 'ttyz4': device_add
[    3.239141] PM: Adding info for No Bus:ttyz4
[    3.239909] device: 'ttyz5': device_add
[    3.240714] PM: Adding info for No Bus:ttyz5
[    3.241490] device: 'ttyz6': device_add
[    3.242330] PM: Adding info for No Bus:ttyz6
[    3.243105] device: 'ttyz7': device_add
[    3.243887] PM: Adding info for No Bus:ttyz7
[    3.244662] device: 'ttyz8': device_add
[    3.245479] PM: Adding info for No Bus:ttyz8
[    3.246252] device: 'ttyz9': device_add
[    3.247062] PM: Adding info for No Bus:ttyz9
[    3.247833] device: 'ttyza': device_add
[    3.248654] PM: Adding info for No Bus:ttyza
[    3.249430] device: 'ttyzb': device_add
[    3.250350] PM: Adding info for No Bus:ttyzb
[    3.251129] device: 'ttyzc': device_add
[    3.251924] PM: Adding info for No Bus:ttyzc
[    3.252764] device: 'ttyzd': device_add
[    3.253575] PM: Adding info for No Bus:ttyzd
[    3.254348] device: 'ttyze': device_add
[    3.255179] PM: Adding info for No Bus:ttyze
[    3.255925] device: 'ttyzf': device_add
[    3.256742] PM: Adding info for No Bus:ttyzf
[    3.257515] device: 'ttya0': device_add
[    3.258340] PM: Adding info for No Bus:ttya0
[    3.259116] device: 'ttya1': device_add
[    3.259921] PM: Adding info for No Bus:ttya1
[    3.260709] device: 'ttya2': device_add
[    3.261648] PM: Adding info for No Bus:ttya2
[    3.262544] device: 'ttya3': device_add
[    3.263339] PM: Adding info for No Bus:ttya3
[    3.264096] device: 'ttya4': device_add
[    3.264905] PM: Adding info for No Bus:ttya4
[    3.265696] device: 'ttya5': device_add
[    3.266503] PM: Adding info for No Bus:ttya5
[    3.267272] device: 'ttya6': device_add
[    3.268087] PM: Adding info for No Bus:ttya6
[    3.268868] device: 'ttya7': device_add
[    3.269703] PM: Adding info for No Bus:ttya7
[    3.270478] device: 'ttya8': device_add
[    3.271276] PM: Adding info for No Bus:ttya8
[    3.272052] device: 'ttya9': device_add
[    3.272845] PM: Adding info for No Bus:ttya9
[    3.273623] device: 'ttyaa': device_add
[    3.274436] PM: Adding info for No Bus:ttyaa
[    3.275196] device: 'ttyab': device_add
[    3.275993] PM: Adding info for No Bus:ttyab
[    3.276789] device: 'ttyac': device_add
[    3.277602] PM: Adding info for No Bus:ttyac
[    3.278396] device: 'ttyad': device_add
[    3.279199] PM: Adding info for No Bus:ttyad
[    3.280051] device: 'ttyae': device_add
[    3.280842] PM: Adding info for No Bus:ttyae
[    3.281620] device: 'ttyaf': device_add
[    3.282459] PM: Adding info for No Bus:ttyaf
[    3.283221] device: 'ttyb0': device_add
[    3.284050] PM: Adding info for No Bus:ttyb0
[    3.284810] device: 'ttyb1': device_add
[    3.285620] PM: Adding info for No Bus:ttyb1
[    3.286407] device: 'ttyb2': device_add
[    3.287211] PM: Adding info for No Bus:ttyb2
[    3.287992] device: 'ttyb3': device_add
[    3.288811] PM: Adding info for No Bus:ttyb3
[    3.289586] device: 'ttyb4': device_add
[    3.290418] PM: Adding info for No Bus:ttyb4
[    3.291193] device: 'ttyb5': device_add
[    3.291993] PM: Adding info for No Bus:ttyb5
[    3.292768] device: 'ttyb6': device_add
[    3.293573] PM: Adding info for No Bus:ttyb6
[    3.294355] device: 'ttyb7': device_add
[    3.295176] PM: Adding info for No Bus:ttyb7
[    3.295928] device: 'ttyb8': device_add
[    3.296729] PM: Adding info for No Bus:ttyb8
[    3.297501] device: 'ttyb9': device_add
[    3.298313] PM: Adding info for No Bus:ttyb9
[    3.299074] device: 'ttyba': device_add
[    3.299866] PM: Adding info for No Bus:ttyba
[    3.300641] device: 'ttybb': device_add
[    3.301461] PM: Adding info for No Bus:ttybb
[    3.302233] device: 'ttybc': device_add
[    3.303068] PM: Adding info for No Bus:ttybc
[    3.303835] device: 'ttybd': device_add
[    3.304728] PM: Adding info for No Bus:ttybd
[    3.305502] device: 'ttybe': device_add
[    3.306321] PM: Adding info for No Bus:ttybe
[    3.307083] device: 'ttybf': device_add
[    3.307856] PM: Adding info for No Bus:ttybf
[    3.308640] device: 'ttyc0': device_add
[    3.309499] PM: Adding info for No Bus:ttyc0
[    3.310355] device: 'ttyc1': device_add
[    3.311156] PM: Adding info for No Bus:ttyc1
[    3.311918] device: 'ttyc2': device_add
[    3.312748] PM: Adding info for No Bus:ttyc2
[    3.313556] device: 'ttyc3': device_add
[    3.314375] PM: Adding info for No Bus:ttyc3
[    3.315139] device: 'ttyc4': device_add
[    3.315940] PM: Adding info for No Bus:ttyc4
[    3.316729] device: 'ttyc5': device_add
[    3.317548] PM: Adding info for No Bus:ttyc5
[    3.318336] device: 'ttyc6': device_add
[    3.319122] PM: Adding info for No Bus:ttyc6
[    3.319886] device: 'ttyc7': device_add
[    3.320716] PM: Adding info for No Bus:ttyc7
[    3.321504] device: 'ttyc8': device_add
[    3.322457] PM: Adding info for No Bus:ttyc8
[    3.323219] device: 'ttyc9': device_add
[    3.324042] PM: Adding info for No Bus:ttyc9
[    3.324789] device: 'ttyca': device_add
[    3.325603] PM: Adding info for No Bus:ttyca
[    3.326379] device: 'ttycb': device_add
[    3.327170] PM: Adding info for No Bus:ttycb
[    3.327920] device: 'ttycc': device_add
[    3.328761] PM: Adding info for No Bus:ttycc
[    3.329549] device: 'ttycd': device_add
[    3.330363] PM: Adding info for No Bus:ttycd
[    3.331123] device: 'ttyce': device_add
[    3.331922] PM: Adding info for No Bus:ttyce
[    3.332702] device: 'ttycf': device_add
[    3.333503] PM: Adding info for No Bus:ttycf
[    3.334271] device: 'ttyd0': device_add
[    3.335081] PM: Adding info for No Bus:ttyd0
[    3.335829] device: 'ttyd1': device_add
[    3.336684] PM: Adding info for No Bus:ttyd1
[    3.337470] device: 'ttyd2': device_add
[    3.338300] PM: Adding info for No Bus:ttyd2
[    3.339105] device: 'ttyd3': device_add
[    3.339995] PM: Adding info for No Bus:ttyd3
[    3.340855] device: 'ttyd4': device_add
[    3.341674] PM: Adding info for No Bus:ttyd4
[    3.342472] device: 'ttyd5': device_add
[    3.343288] PM: Adding info for No Bus:ttyd5
[    3.344059] device: 'ttyd6': device_add
[    3.344840] PM: Adding info for No Bus:ttyd6
[    3.345615] device: 'ttyd7': device_add
[    3.346447] PM: Adding info for No Bus:ttyd7
[    3.347222] device: 'ttyd8': device_add
[    3.348047] PM: Adding info for No Bus:ttyd8
[    3.348813] device: 'ttyd9': device_add
[    3.349614] PM: Adding info for No Bus:ttyd9
[    3.350387] device: 'ttyda': device_add
[    3.351193] PM: Adding info for No Bus:ttyda
[    3.351943] device: 'ttydb': device_add
[    3.352741] PM: Adding info for No Bus:ttydb
[    3.353513] device: 'ttydc': device_add
[    3.354341] PM: Adding info for No Bus:ttydc
[    3.355115] device: 'ttydd': device_add
[    3.355904] PM: Adding info for No Bus:ttydd
[    3.356689] device: 'ttyde': device_add
[    3.357507] PM: Adding info for No Bus:ttyde
[    3.358294] device: 'ttydf': device_add
[    3.359094] PM: Adding info for No Bus:ttydf
[    3.359847] device: 'ttye0': device_add
[    3.360646] PM: Adding info for No Bus:ttye0
[    3.361419] device: 'ttye1': device_add
[    3.362223] PM: Adding info for No Bus:ttye1
[    3.362981] device: 'ttye2': device_add
[    3.363838] PM: Adding info for No Bus:ttye2
[    3.364654] device: 'ttye3': device_add
[    3.365489] PM: Adding info for No Bus:ttye3
[    3.366263] device: 'ttye4': device_add
[    3.367075] PM: Adding info for No Bus:ttye4
[    3.367841] device: 'ttye5': device_add
[    3.368679] PM: Adding info for No Bus:ttye5
[    3.369456] device: 'ttye6': device_add
[    3.370333] PM: Adding info for No Bus:ttye6
[    3.371096] device: 'ttye7': device_add
[    3.371877] PM: Adding info for No Bus:ttye7
[    3.372661] device: 'ttye8': device_add
[    3.373488] PM: Adding info for No Bus:ttye8
[    3.374270] device: 'ttye9': device_add
[    3.375069] PM: Adding info for No Bus:ttye9
[    3.375820] device: 'ttyea': device_add
[    3.376636] PM: Adding info for No Bus:ttyea
[    3.377409] device: 'ttyeb': device_add
[    3.378198] PM: Adding info for No Bus:ttyeb
[    3.378959] device: 'ttyec': device_add
[    3.379776] PM: Adding info for No Bus:ttyec
[    3.380548] device: 'ttyed': device_add
[    3.381374] PM: Adding info for No Bus:ttyed
[    3.382157] device: 'ttyee': device_add
[    3.382944] PM: Adding info for No Bus:ttyee
[    3.383718] device: 'ttyef': device_add
[    3.384533] PM: Adding info for No Bus:ttyef
[    3.385305] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[    3.386419] Registering platform device 'serial8250'. Parent at platform
[    3.387574] device: 'serial8250': device_add
[    3.388345] bus: 'platform': add device serial8250
[    3.389203] PM: Adding info for platform:serial8250
[    3.414529] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[    3.415600] device: 'ttyS0': device_add
[    3.416501] PM: Adding info for No Bus:ttyS0
[    3.417378] device: 'ttyS1': device_add
[    3.418315] PM: Adding info for No Bus:ttyS1
[    3.419133] device: 'ttyS2': device_add
[    3.419926] PM: Adding info for No Bus:ttyS2
[    3.420806] device: 'ttyS3': device_add
[    3.421680] PM: Adding info for No Bus:ttyS3
[    3.422475] bus: 'platform': add driver serial8250
[    3.423329] bus: 'platform': driver_probe_device: matched device serial8250 with driver serial8250
[    3.424864] bus: 'platform': really_probe: probing driver serial8250 with device serial8250
[    3.426333] driver: 'serial8250': driver_bound: bound to device 'serial8250'
[    3.427551] bus: 'platform': really_probe: bound device serial8250 to driver serial8250
[    3.428975] device: 'ttyprintk': device_add
[    3.429964] PM: Adding info for No Bus:ttyprintk
[    3.430795] bus: 'platform': add driver tpm_tis
[    3.431616] Registering platform device 'tpm_tis'. Parent at platform
[    3.432728] device: 'tpm_tis': device_add
[    3.433455] bus: 'platform': add device tpm_tis
[    3.434306] PM: Adding info for platform:tpm_tis
[    3.435137] bus: 'platform': driver_probe_device: matched device tpm_tis with driver tpm_tis
[    3.436587] bus: 'platform': really_probe: probing driver tpm_tis with device tpm_tis
[    3.437930] driver: 'tpm_tis': driver_bound: bound to device 'tpm_tis'
[    3.439067] bus: 'platform': really_probe: bound device tpm_tis to driver tpm_tis
[    3.440362] device: 'tpm0': device_add
[    3.441156] PM: Adding info for No Bus:tpm0
[    4.195061] device: 'tpm0': device_unregister
[    4.195834] PM: Removing info for No Bus:tpm0
[    4.197167] device: 'tpm0': device_create_release
[    4.198234] PM: Removing info for platform:tpm_tis
[    4.199181] bus: 'platform': remove device tpm_tis
[    4.200169] bus: 'platform': remove driver tpm_tis
[    4.201056] driver: 'tpm_tis': driver_release
[    4.201866] Registering platform device 'i8042'. Parent at platform
[    4.202958] device: 'i8042': device_add
[    4.203650] bus: 'platform': add device i8042
[    4.204445] PM: Adding info for platform:i8042
[    4.205233] bus: 'platform': add driver i8042
[    4.205989] bus: 'platform': driver_probe_device: matched device i8042 with driver i8042
[    4.207383] bus: 'platform': really_probe: probing driver i8042 with device i8042
[    4.209696] serio: i8042 KBD port at 0x60,0x64 irq 1
[    4.210710] serio: i8042 AUX port at 0x60,0x64 irq 12
[    4.211694] device: 'serio0': device_add
[    4.212434] bus: 'serio': add device serio0
[    4.213211] PM: Adding info for serio:serio0
[    4.214044] driver: 'i8042': driver_bound: bound to device 'i8042'
[    4.215125] device: 'serio1': device_add
[    4.215818] bus: 'serio': add device serio1
[    4.216637] PM: Adding info for serio:serio1
[    4.217436] bus: 'platform': really_probe: bound device i8042 to driver i8042
[    4.218699] bus: 'serio': add driver atkbd
[    4.219484] cpuidle: using governor ladder
[    4.220333] 
[    4.220333] printing PIC contents
[    4.221167] ... PIC  IMR: fffb
[    4.221702] ... PIC  IRR: 1013
[    4.222263] ... PIC  ISR: 0000
[    4.222790] ... PIC ELCR: 0c00
[    4.223345] printing local APIC contents on CPU#0/0:
[    4.224185] ... APIC ID:      00000000 (0)
[    4.224329] ... APIC VERSION: 00050014
[    4.224329] ... APIC TASKPRI: 00000000 (00)
[    4.224329] ... APIC PROCPRI: 00000000
[    4.224329] ... APIC LDR: 01000000
[    4.224329] ... APIC DFR: ffffffff
[    4.224329] ... APIC SPIV: 000001ff
[    4.224329] ... APIC ISR field:
[    4.224329] 0000000000000000000000000000000000000000000000000000000000000000
[    4.224329] ... APIC TMR field:
[    4.224329] 0000000000000000000000000000000000000000000000000000000000000000
[    4.224329] ... APIC IRR field:
[    4.224329] 0000000000000000000000000000000000000000000000000000000020008000
[    4.224329] ... APIC ESR: 00000000
[    4.224329] ... APIC ICR: 00000841
[    4.224329] ... APIC ICR2: 01000000
[    4.224329] ... APIC LVTT: 000000ef
[    4.224329] ... APIC LVTPC: 00010000
[    4.224329] ... APIC LVT0: 00010700
[    4.224329] ... APIC LVT1: 00000400
[    4.224329] ... APIC LVTERR: 000000fe
[    4.224329] ... APIC TMICT: 0000a2d2
[    4.224329] ... APIC TMCCT: 00000000
[    4.224329] ... APIC TDCR: 00000003
[    4.224329] 
[    4.241632] number of MP IRQ sources: 20.
[    4.242350] number of IO-APIC #2 registers: 24.
[    4.243145] testing the IO APIC.......................
[    4.244064] IO APIC #2......
[    4.244565] .... register #00: 00000000
[    4.245234] .......    : physical APIC id: 00
[    4.245976] .......    : Delivery Type: 0
[    4.246686] .......    : LTS          : 0
[    4.247398] .... register #01: 00170011
[    4.248068] .......     : max redirection entries: 17
[    4.248936] .......     : PRQ implemented: 0
[    4.249687] .......     : IO APIC version: 11
[    4.250456] .... register #02: 00000000
[    4.251139] .......     : arbitration: 00
[    4.251841] .... IRQ redirection table:
[    4.252600]  NR Dst Mask Trig IRR Pol Stat Dmod Deli Vect:
[    4.253557]  00 00  1    0    0   0   0    0    0    00
[    4.254494]  01 03  0    0    0   0   0    1    1    41
[    4.255445]  02 03  0    0    0   0   0    1    1    51
[    4.256380]  03 01  0    0    0   0   0    1    1    61
[    4.257321]  04 01  1    0    0   0   0    1    1    71
[    4.258269]  05 01  0    0    0   0   0    1    1    81
[    4.259201]  06 01  0    0    0   0   0    1    1    91
[    4.260156]  07 01  0    0    0   0   0    1    1    A1
[    4.261116]  08 01  0    0    0   0   0    1    1    B1
[    4.262064]  09 03  1    1    0   0   0    1    1    C1
[    4.263075]  0a 03  1    1    0   0   0    1    1    D1
[    4.263993]  0b 03  1    1    0   0   0    1    1    E1
[    4.264927]  0c 03  0    0    0   0   0    1    1    22
[    4.265864]  0d 01  0    0    0   0   0    1    1    42
[    4.266798]  0e 01  0    0    0   0   0    1    1    52
[    4.267738]  0f 01  0    0    0   0   0    1    1    62
[    4.268703]  10 00  1    0    0   0   0    0    0    00
[    4.269711]  11 00  1    0    0   0   0    0    0    00
[    4.270679]  12 00  1    0    0   0   0    0    0    00
[    4.271623]  13 00  1    0    0   0   0    0    0    00
[    4.272560]  14 00  1    0    0   0   0    0    0    00
[    4.273498]  15 00  1    0    0   0   0    0    0    00
[    4.274437]  16 00  1    0    0   0   0    0    0    00
[    4.275374]  17 00  1    0    0   0   0    0    0    00
[    4.276302] IRQ to pin mappings:
[    4.276861] IRQ0 -> 0:2
[    4.277369] IRQ1 -> 0:1
[    4.277846] IRQ3 -> 0:3
[    4.278368] IRQ4 -> 0:4
[    4.278840] IRQ5 -> 0:5
[    4.279341] IRQ6 -> 0:6
[    4.279806] IRQ7 -> 0:7
[    4.280307] IRQ8 -> 0:8
[    4.280772] IRQ12 -> 0:12
[    4.281299] IRQ13 -> 0:13
[    4.281794] IRQ14 -> 0:14
[    4.282323] IRQ15 -> 0:15
[    4.282818] IRQ33 -> 0:9
[    4.283330] IRQ34 -> 0:10
[    4.283821] IRQ35 -> 0:11
[    4.284346] .................................... done.
[    4.285272] bus: 'serio': driver_probe_device: matched device serio0 with driver atkbd
[    4.285342] device: 'cpu_dma_latency': device_add
[    4.285428] PM: Adding info for No Bus:cpu_dma_latency
[    4.285464] device: 'network_latency': device_add
[    4.285544] PM: Adding info for No Bus:network_latency
[    4.285575] device: 'network_throughput': device_add
[    4.285639] PM: Adding info for No Bus:network_throughput
[    4.285682] PM: Hibernation image not present or could not be loaded.
[    4.285721] registered taskstats version 1
[    4.285723] Running tests on trace events:
[    4.285725] Testing event kfree_skb: [    4.294208] bus: 'serio': really_probe: probing driver atkbd with device serio0
[    4.297195] device: 'input0': device_add
[    4.298042] PM: Adding info for No Bus:input0
[    4.298925] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
[    4.300637] driver: 'serio0': driver_bound: bound to device 'atkbd'
[    4.300706] Testing event consume_skb: OK
[    4.302376] bus: 'serio': really_probe: bound device serio0 to driver atkbd
[    4.303686] bus: 'serio': driver_probe_device: matched device serio1 with driver atkbd
[    4.305078] bus: 'serio': really_probe: probing driver atkbd with device serio1
[    4.306670] atkbd: probe of serio1 rejects match -19
[    4.308159] OK
[    4.308516] Testing event skb_copy_datagram_iovec: OK
[    4.313332] Testing event net_dev_xmit: OK
[    4.318324] Testing event net_dev_queue: OK
[    4.323321] Testing event netif_receive_skb: OK
[    4.328338] Testing event netif_rx: OK
[    4.333307] Testing event napi_poll: OK
[    4.338312] Testing event sock_rcvqueue_full: OK
[    4.343327] Testing event sock_exceed_buf_limit: OK
[    4.348306] Testing event udp_fail_queue_rcv_skb: OK
[    4.353292] Testing event regmap_reg_write: OK
[    4.358307] Testing event regmap_reg_read: OK
[    4.363288] Testing event regmap_reg_read_cache: OK
[    4.368310] Testing event regmap_hw_read_start: OK
[    4.373288] Testing event regmap_hw_read_done: OK
[    4.378311] Testing event regmap_hw_write_start: OK
[    4.383292] Testing event regmap_hw_write_done: OK
[    4.388300] Testing event regcache_sync: OK
[    4.393289] Testing event regmap_cache_only: OK
[    4.398338] Testing event regmap_cache_bypass: OK
[    4.403288] Testing event mix_pool_bytes: OK
[    4.408307] Testing event mix_pool_bytes_nolock: OK
[    4.413289] Testing event credit_entropy_bits: OK
[    4.418306] Testing event get_random_bytes: OK
[    4.423309] Testing event extract_entropy: OK
[    4.428309] Testing event extract_entropy_user: OK
[    4.433289] Testing event regulator_enable: OK
[    4.438303] Testing event regulator_enable_delay: OK
[    4.443323] Testing event regulator_enable_complete: OK
[    4.448298] Testing event regulator_disable: OK
[    4.453291] Testing event regulator_disable_complete: OK
[    4.458307] Testing event regulator_set_voltage: OK
[    4.463288] Testing event regulator_set_voltage_complete: OK
[    4.468304] Testing event gpio_direction: OK
[    4.473295] Testing event gpio_value: OK
[    4.478304] Testing event block_rq_abort: OK
[    4.483238] Testing event block_rq_requeue: OK
[    4.488339] Testing event block_rq_complete: OK
[    4.493294] Testing event block_rq_insert: OK
[    4.498307] Testing event block_rq_issue: OK
[    4.503303] Testing event block_bio_bounce: OK
[    4.508298] Testing event block_bio_complete: OK
[    4.513292] Testing event block_bio_backmerge: OK
[    4.518301] Testing event block_bio_frontmerge: OK
[    4.523291] Testing event block_bio_queue: OK
[    4.528303] Testing event block_getrq: OK
[    4.533329] Testing event block_sleeprq: OK
[    4.538301] Testing event block_plug: OK
[    4.543289] Testing event block_unplug: OK
[    4.548309] Testing event block_split: OK
[    4.553295] Testing event block_bio_remap: OK
[    4.558301] Testing event block_rq_remap: OK
[    4.563298] Testing event writeback_nothread: OK
[    4.568301] Testing event writeback_queue: OK
[    4.573290] Testing event writeback_exec: OK
[    4.578330] Testing event writeback_start: OK
[    4.583291] Testing event writeback_written: OK
[    4.588303] Testing event writeback_wait: OK
[    4.593292] Testing event writeback_pages_written: OK
[    4.598237] Testing event writeback_nowork: OK
[    4.603288] Testing event writeback_wake_background: OK
[    4.608306] Testing event writeback_wake_thread: OK
[    4.613301] Testing event writeback_wake_forker_thread: OK
[    4.618306] Testing event writeback_bdi_register: OK
[    4.623290] Testing event writeback_bdi_unregister: OK
[    4.628303] Testing event writeback_thread_start: OK
[    4.633329] Testing event writeback_thread_stop: OK
[    4.638304] Testing event wbc_writepage: OK
[    4.643290] Testing event writeback_queue_io: OK
[    4.648306] Testing event global_dirty_state: OK
[    4.653237] Testing event bdi_dirty_ratelimit: OK
[    4.658295] Testing event balance_dirty_pages: OK
[    4.663256] Testing event writeback_sb_inodes_requeue: OK
[    4.668272] Testing event writeback_congestion_wait: OK
[    4.673256] Testing event writeback_wait_iff_congested: OK
[    4.678306] Testing event writeback_single_inode: OK
[    4.683271] Testing event mm_compaction_isolate_migratepages: OK
[    4.688266] Testing event mm_compaction_isolate_freepages: OK
[    4.693258] Testing event mm_compaction_migratepages: OK
[    4.698274] Testing event kmalloc: OK
[    4.703263] Testing event kmem_cache_alloc: OK
[    4.708277] Testing event kmalloc_node: OK
[    4.713254] Testing event kmem_cache_alloc_node: OK
[    4.718264] Testing event kfree: OK
[    4.722270] Testing event kmem_cache_free: OK
[    4.727261] Testing event mm_page_free: OK
[    4.732307] Testing event mm_page_free_batched: OK
[    4.737256] Testing event mm_page_alloc: OK
[    4.742272] Testing event mm_page_alloc_zone_locked: OK
[    4.747257] Testing event mm_page_pcpu_drain: OK
[    4.752254] Testing event mm_page_alloc_extfrag: OK
[    4.757256] Testing event mm_vmscan_kswapd_sleep: OK
[    4.762257] Testing event mm_vmscan_kswapd_wake: OK
[    4.767263] Testing event mm_vmscan_wakeup_kswapd: OK
[    4.772256] Testing event mm_vmscan_direct_reclaim_begin: OK
[    4.777293] Testing event mm_vmscan_memcg_reclaim_begin: OK
[    4.782256] Testing event mm_vmscan_memcg_softlimit_reclaim_begin: OK
[    4.787260] Testing event mm_vmscan_direct_reclaim_end: OK
[    4.792254] Testing event mm_vmscan_memcg_reclaim_end: OK
[    4.797258] Testing event mm_vmscan_memcg_softlimit_reclaim_end: OK
[    4.802258] Testing event mm_shrink_slab_start: OK
[    4.807254] Testing event mm_shrink_slab_end: OK
[    4.812267] Testing event mm_vmscan_lru_isolate: OK
[    4.817256] Testing event mm_vmscan_memcg_isolate: OK
[    4.822294] Testing event mm_vmscan_writepage: OK
[    4.827257] Testing event mm_vmscan_lru_shrink_inactive: OK
[    4.832255] Testing event oom_score_adj_update: OK
[    4.837272] Testing event rpm_suspend: OK
[    4.842266] Testing event rpm_resume: OK
[    4.847254] Testing event rpm_idle: OK
[    4.852275] Testing event rpm_return_int: OK
[    4.857259] Testing event cpu_idle: OK
[    4.862274] Testing event cpu_frequency: OK
[    4.867257] Testing event machine_suspend: OK
[    4.872277] Testing event wakeup_source_activate: OK
[    4.877254] Testing event wakeup_source_deactivate: OK
[    4.882258] Testing event clock_enable: OK
[    4.887259] Testing event clock_disable: OK
[    4.892270] Testing event clock_set_rate: OK
[    4.897258] Testing event power_domain_target: OK
[    4.902257] Testing event ftrace_test_filter: OK
[    4.907300] Testing event module_load: OK
[    4.912275] Testing event module_free: OK
[    4.917497] Testing event module_request: OK
[    4.923568] Testing event lock_acquire: OK
[    4.928486] Testing event lock_release: OK
[    4.933310] Testing event sched_kthread_stop: OK
[    4.938267] Testing event sched_kthread_stop_ret: OK
[    4.943258] Testing event sched_wakeup: OK
[    4.948373] Testing event sched_wakeup_new: OK
[    4.953258] Testing event sched_switch: OK
[    4.958273] Testing event sched_migrate_task: OK
[    4.963253] Testing event sched_process_free: OK
[    4.968266] Testing event sched_process_exit: OK
[    4.973265] Testing event sched_wait_task: OK
[    4.978267] Testing event sched_process_wait: OK
[    4.983255] Testing event sched_process_fork: OK
[    4.988270] Testing event sched_process_exec: OK
[    4.993294] Testing event sched_stat_wait: OK
[    4.998277] Testing event sched_stat_sleep: OK
[    5.003261] Testing event sched_stat_iowait: OK
[    5.008266] Testing event sched_stat_blocked: OK
[    5.013261] Testing event sched_stat_runtime: OK
[    5.018276] Testing event sched_pi_setprio: OK
[    5.023255] Testing event rcu_utilization: OK
[    5.028279] Testing event rcu_grace_period: OK
[    5.033260] Testing event rcu_grace_period_init: OK
[    5.038307] Testing event rcu_preempt_task: OK
[    5.043266] Testing event rcu_unlock_preempted_task: OK
[    5.048267] Testing event rcu_quiescent_state_report: OK
[    5.053265] Testing event rcu_fqs: OK
[    5.058272] Testing event rcu_dyntick: OK
[    5.063270] Testing event rcu_prep_idle: OK
[    5.068278] Testing event rcu_callback: OK
[    5.073260] Testing event rcu_kfree_callback: OK
[    5.078266] Testing event rcu_batch_start: OK
[    5.083292] Testing event rcu_invoke_callback: OK
[    5.088274] Testing event rcu_invoke_kfree_callback: OK
[    5.093259] Testing event rcu_batch_end: OK
[    5.098278] Testing event rcu_torture_read: OK
[    5.103267] Testing event rcu_barrier: OK
[    5.108276] Testing event workqueue_queue_work: OK
[    5.113252] Testing event workqueue_activate_work: OK
[    5.118272] Testing event workqueue_execute_start: OK
[    5.123257] Testing event workqueue_execute_end: OK
[    5.128281] Testing event signal_generate: OK
[    5.133256] Testing event signal_deliver: OK
[    5.138276] Testing event timer_init: OK
[    5.143260] Testing event timer_start: OK
[    5.148265] Testing event timer_expire_entry: OK
[    5.153257] Testing event timer_expire_exit: OK
[    5.158277] Testing event timer_cancel: OK
[    5.163264] Testing event hrtimer_init: OK
[    5.168277] Testing event hrtimer_start: OK
[    5.173309] Testing event hrtimer_expire_entry: OK
[    5.178271] Testing event hrtimer_expire_exit: OK
[    5.183301] Testing event hrtimer_cancel: OK
[    5.188284] Testing event itimer_state: OK
[    5.193257] Testing event itimer_expire: OK
[    5.198282] Testing event irq_handler_entry: OK
[    5.203257] Testing event irq_handler_exit: OK
[    5.208266] Testing event softirq_entry: OK
[    5.213257] Testing event softirq_exit: OK
[    5.218277] Testing event softirq_raise: OK
[    5.223259] Testing event console: OK
[    5.228311] Testing event task_newtask: OK
[    5.233255] Testing event task_rename: OK
[    5.238268] Testing event sys_enter: OK
[    5.243258] Testing event sys_exit: OK
[    5.248272] Testing event emulate_vsyscall: OK
[    5.253276] Running tests on trace event systems:
[    5.254168] Testing event system skb: OK
[    5.259478] Testing event system net: OK
[    5.264351] Testing event system napi: OK
[    5.269296] Testing event system sock: OK
[    5.274296] Testing event system udp: OK
[    5.279472] Testing event system regmap: OK
[    5.284388] Testing event system random: OK
[    5.289346] Testing event system regulator: OK
[    5.294352] Testing event system gpio: OK
[    5.299290] Testing event system block: OK
[    5.304487] Testing event system writeback: OK
[    5.309652] Testing event system compaction: 

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
@ 2012-07-13  2:08             ` Fengguang Wu
  0 siblings, 0 replies; 96+ messages in thread
From: Fengguang Wu @ 2012-07-13  2:08 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar,
	herbert, davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel,
	gustavo, johan.hedberg, linux-bluetooth, martin.petersen,
	Tony Luck

[-- Attachment #1: Type: text/plain, Size: 3837 bytes --]

On Thu, Jul 12, 2012 at 02:45:14PM -0700, Tejun Heo wrote:
> Hello, again.
> 
> On Thu, Jul 12, 2012 at 10:05:19AM -0700, Tejun Heo wrote:
> > On Thu, Jul 12, 2012 at 09:06:48PM +0800, Fengguang Wu wrote:
> > > [    0.207977] WARNING: at /c/kernel-tests/mm/kernel/workqueue.c:1217 worker_enter_idle+0x2b8/0x32b()
> > > [    0.207977] Modules linked in:
> > > [    0.207977] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-08414-g9645fff #15
> > > [    0.207977] Call Trace:
> > > [    0.207977]  [<ffffffff81087189>] ? worker_enter_idle+0x2b8/0x32b
> > > [    0.207977]  [<ffffffff810559d9>] warn_slowpath_common+0xae/0xdb
> > > [    0.207977]  [<ffffffff81055a2e>] warn_slowpath_null+0x28/0x31
> > > [    0.207977]  [<ffffffff81087189>] worker_enter_idle+0x2b8/0x32b
> > > [    0.207977]  [<ffffffff81087222>] start_worker+0x26/0x42
> > > [    0.207977]  [<ffffffff81c8b261>] init_workqueues+0x2d2/0x59a
> > > [    0.207977]  [<ffffffff81c8af8f>] ? usermodehelper_init+0x8a/0x8a
> > > [    0.207977]  [<ffffffff81000284>] do_one_initcall+0xce/0x272
> > > [    0.207977]  [<ffffffff81c6f650>] kernel_init+0x12e/0x3c1
> > > [    0.207977]  [<ffffffff814b9b74>] kernel_thread_helper+0x4/0x10
> > > [    0.207977]  [<ffffffff814b80b0>] ? retint_restore_args+0x13/0x13
> > > [    0.207977]  [<ffffffff81c6f522>] ? start_kernel+0x737/0x737
> > > [    0.207977]  [<ffffffff814b9b70>] ? gs_change+0x13/0x13
> > 
> > Yeah, I forgot to flip the WARN_ON_ONCE() condition so that it checks
> > nr_running before looking at pool->nr_running.  The warning is
> > spurious.  Will post fix soon.
> 
> I was wrong and am now dazed and confused.  That's from
> init_workqueues() where only cpu0 is running.  How the hell did
> nr_running manage to become non-zero at that point?  Can you please
> apply the following patch and report the boot log?  Thank you.

Tejun, here is the data I got:

[    0.165669] Performance Events: unsupported Netburst CPU model 6 no PMU driver, software events only.
[    0.167001] XXX cpu=0 gcwq=ffff88000dc0cfc0 base=ffff88000dc11e80
[    0.167989] XXX cpu=0 nr_running=0 @ ffff88000dc11e80
[    0.168988] XXX cpu=0 nr_running=0 @ ffff88000dc11e88
[    0.169988] XXX cpu=1 gcwq=ffff88000dd0cfc0 base=ffff88000dd11e80
[    0.170988] XXX cpu=1 nr_running=0 @ ffff88000dd11e80
[    0.171987] XXX cpu=1 nr_running=0 @ ffff88000dd11e88
[    0.172988] XXX cpu=8 nr_running=0 @ ffffffff81d7c430
[    0.173987] XXX cpu=8 nr_running=12 @ ffffffff81d7c438
[    0.175416] ------------[ cut here ]------------
[    0.175981] WARNING: at /c/wfg/linux/kernel/workqueue.c:1220 worker_enter_idle+0x2b8/0x32b()
[    0.175981] Modules linked in:
[    0.175981] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-bisect-next-20120712-dirty #102
[    0.175981] Call Trace:
[    0.175981]  [<ffffffff81087455>] ? worker_enter_idle+0x2b8/0x32b
[    0.175981]  [<ffffffff810559d1>] warn_slowpath_common+0xae/0xdb
[    0.175981]  [<ffffffff81055a26>] warn_slowpath_null+0x28/0x31
[    0.175981]  [<ffffffff81087455>] worker_enter_idle+0x2b8/0x32b
[    0.175981]  [<ffffffff810874ee>] start_worker+0x26/0x42
[    0.175981]  [<ffffffff81c7dc4d>] init_workqueues+0x370/0x638
[    0.175981]  [<ffffffff81c7d8dd>] ? usermodehelper_init+0x8a/0x8a
[    0.175981]  [<ffffffff81000284>] do_one_initcall+0xce/0x272
[    0.175981]  [<ffffffff81c62652>] kernel_init+0x12e/0x3c1
[    0.175981]  [<ffffffff814b6e74>] kernel_thread_helper+0x4/0x10
[    0.175981]  [<ffffffff814b53b0>] ? retint_restore_args+0x13/0x13
[    0.175981]  [<ffffffff81c62524>] ? start_kernel+0x739/0x739
[    0.175981]  [<ffffffff814b6e70>] ? gs_change+0x13/0x13
[    0.175981] ---[ end trace c22d98677c4d3e37 ]---
[    0.178091] Testing tracer nop: PASSED

The attached dmesg is not complete because, once it gets the oops message,
my script kills the kvm to save time.

Thanks,
Fengguang

[-- Attachment #2: dmesg-kvm_bisect-waimea-27649-2012-07-13-08-34-35 --]
[-- Type: text/plain, Size: 93870 bytes --]

[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.5.0-rc6-bisect-next-20120712-dirty (wfg@bee) (gcc version 4.7.0 (Debian 4.7.1-1) ) #102 SMP Fri Jul 13 08:32:30 CST 2012
[    0.000000] Command line: bisect-reboot x86_64-randconfig run_test= trinity=0 auth_hashtable_size=10 sunrpc.auth_hashtable_size=10 log_buf_len=8M ignore_loglevel debug sched_debug apic=debug dynamic_printk sysrq_always_enabled panic=10 hung_task_panic=1 softlockup_panic=1 unknown_nmi_panic=1 nmi_watchdog=panic,lapic  prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal  root=/dev/ram0 rw BOOT_IMAGE=x86_64/vmlinuz-bisect
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   Centaur CentaurHauls
[    0.000000] Disabled fast string operations
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009f3ff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009f400-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000000fffcfff] usable
[    0.000000] BIOS-e820: [mem 0x000000000fffd000-0x000000000fffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fffbc000-0x00000000ffffffff] reserved
[    0.000000] debug: ignoring loglevel setting.
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] e820: update [mem 0x00000000-0x0000ffff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0xfffd max_arch_pfn = 0x400000000
[    0.000000] MTRR default type: write-back
[    0.000000] MTRR fixed ranges enabled:
[    0.000000]   00000-9FFFF write-back
[    0.000000]   A0000-BFFFF uncachable
[    0.000000]   C0000-FFFFF write-protect
[    0.000000] MTRR variable ranges enabled:
[    0.000000]   0 base 00E0000000 mask FFE0000000 uncachable
[    0.000000]   1 disabled
[    0.000000]   2 disabled
[    0.000000]   3 disabled
[    0.000000]   4 disabled
[    0.000000]   5 disabled
[    0.000000]   6 disabled
[    0.000000]   7 disabled
[    0.000000] Scan for SMP in [mem 0x00000000-0x000003ff]
[    0.000000] Scan for SMP in [mem 0x0009fc00-0x0009ffff]
[    0.000000] Scan for SMP in [mem 0x000f0000-0x000fffff]
[    0.000000] found SMP MP-table at [mem 0x000f8860-0x000f886f] mapped at [ffff8800000f8860]
[    0.000000]   mpc: f8870-f898c
[    0.000000] initial memory mapped: [mem 0x00000000-0x1fffffff]
[    0.000000] Base memory trampoline at [ffff880000099000] 99000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x0fffcfff]
[    0.000000]  [mem 0x00000000-0x0fffcfff] page 4k
[    0.000000] kernel direct mapping tables up to 0xfffcfff @ [mem 0x0e854000-0x0e8d5fff]
[    0.000000] log_buf_len: 8388608
[    0.000000] early log buf free: 128176(97%)
[    0.000000] RAMDISK: [mem 0x0e8d6000-0x0ffeffff]
[    0.000000] No NUMA configuration found
[    0.000000] Faking a node at [mem 0x0000000000000000-0x000000000fffcfff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x0fffcfff]
[    0.000000]   NODE_DATA [mem 0x0fff8000-0x0fffcfff]
[    0.000000] kvm-clock: Using msrs 12 and 11
[    0.000000] kvm-clock: cpu 0, msr 0:1c5fe01, boot clock
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00010000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00010000-0x0009efff]
[    0.000000]   node   0: [mem 0x00100000-0x0fffcfff]
[    0.000000] On node 0 totalpages: 65420
[    0.000000]   DMA zone: 64 pages used for memmap
[    0.000000]   DMA zone: 6 pages reserved
[    0.000000]   DMA zone: 3913 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 960 pages used for memmap
[    0.000000]   DMA32 zone: 60477 pages, LIFO batch:15
[    0.000000] Intel MultiProcessor Specification v1.4
[    0.000000]   mpc: f8870-f898c
[    0.000000] MPTABLE: OEM ID: BOCHSCPU
[    0.000000] MPTABLE: Product ID: 0.1         
[    0.000000] MPTABLE: APIC at: 0xFEE00000
[    0.000000] mapped APIC to ffffffffff5fb000 (        fee00000)
[    0.000000] Processor #0 (Bootup-CPU)
[    0.000000] Processor #1
[    0.000000] Bus #0 is PCI   
[    0.000000] Bus #1 is ISA   
[    0.000000] IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 04, APIC ID 2, APIC INT 09
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 0c, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 10, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 14, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 18, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 1c, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 20, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 24, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 00, APIC ID 2, APIC INT 02
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 01, APIC ID 2, APIC INT 01
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 03, APIC ID 2, APIC INT 03
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 04, APIC ID 2, APIC INT 04
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 05, APIC ID 2, APIC INT 05
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 06, APIC ID 2, APIC INT 06
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 07, APIC ID 2, APIC INT 07
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 08, APIC ID 2, APIC INT 08
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0c, APIC ID 2, APIC INT 0c
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0d, APIC ID 2, APIC INT 0d
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0e, APIC ID 2, APIC INT 0e
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0f, APIC ID 2, APIC INT 0f
[    0.000000] Lint: type 3, pol 0, trig 0, bus 01, IRQ 00, APIC ID 0, APIC LINT 00
[    0.000000] Lint: type 1, pol 0, trig 0, bus 01, IRQ 00, APIC ID 0, APIC LINT 01
[    0.000000] Processors: 2
[    0.000000] smpboot: Allowing 2 CPUs, 0 hotplug CPUs
[    0.000000] mapped IOAPIC to ffffffffff5fa000 (fec00000)
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: 000000000009f000 - 00000000000a0000
[    0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000
[    0.000000] PM: Registered nosave memory: 00000000000f0000 - 0000000000100000
[    0.000000] e820: [mem 0x10000000-0xfffbbfff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on KVM
[    0.000000] setup_percpu: NR_CPUS:8 nr_cpumask_bits:8 nr_cpu_ids:2 nr_node_ids:1
[    0.000000] PERCPU: Embedded 26 pages/cpu @ffff88000dc00000 s76800 r8192 d21504 u1048576
[    0.000000] pcpu-alloc: s76800 r8192 d21504 u1048576 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 
[    0.000000] kvm-clock: cpu 0, msr 0:dc11e01, primary cpu clock
[    0.000000] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 64390
[    0.000000] Policy zone: DMA32
[    0.000000] Kernel command line: bisect-reboot x86_64-randconfig run_test= trinity=0 auth_hashtable_size=10 sunrpc.auth_hashtable_size=10 log_buf_len=8M ignore_loglevel debug sched_debug apic=debug dynamic_printk sysrq_always_enabled panic=10 hung_task_panic=1 softlockup_panic=1 unknown_nmi_panic=1 nmi_watchdog=panic,lapic  prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal  root=/dev/ram0 rw BOOT_IMAGE=x86_64/vmlinuz-bisect
[    0.000000] PID hash table entries: 1024 (order: 1, 8192 bytes)
[    0.000000] __ex_table already sorted, skipping sort
[    0.000000] Memory: 200000k/262132k available (4835k kernel code, 452k absent, 61680k reserved, 7751k data, 568k init)
[    0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
[    0.000000] Hierarchical RCU implementation.
[    0.000000] 	RCU debugfs-based tracing is enabled.
[    0.000000] 	RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
[    0.000000] NR_IRQS:4352 nr_irqs:56 16
[    0.000000] console [ttyS0] enabled
[    0.000000] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
[    0.000000] ... MAX_LOCKDEP_SUBCLASSES:  8
[    0.000000] ... MAX_LOCK_DEPTH:          48
[    0.000000] ... MAX_LOCKDEP_KEYS:        8191
[    0.000000] ... CLASSHASH_SIZE:          4096
[    0.000000] ... MAX_LOCKDEP_ENTRIES:     16384
[    0.000000] ... MAX_LOCKDEP_CHAINS:      32768
[    0.000000] ... CHAINHASH_SIZE:          16384
[    0.000000]  memory used by lock dependency info: 5855 kB
[    0.000000]  per task-struct memory footprint: 1920 bytes
[    0.000000] ------------------------
[    0.000000] | Locking API testsuite:
[    0.000000] ----------------------------------------------------------------------------
[    0.000000]                                  | spin |wlock |rlock |mutex | wsem | rsem |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]                      A-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                  A-B-B-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]              A-B-B-C-C-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]              A-B-C-A-B-C deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-B-C-C-D-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-C-D-B-D-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-C-D-B-C-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                     double unlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                   initialize held:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                  bad unlock order:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]               recursive read-lock:             |  ok  |             |  ok  |
[    0.000000]            recursive read-lock #2:             |  ok  |             |  ok  |
[    0.000000]             mixed read-write-lock:             |  ok  |             |  ok  |
[    0.000000]             mixed write-read-lock:             |  ok  |             |  ok  |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]      hard-irqs-on + irq-safe-A/12:  ok  |  ok  |  ok  |
[    0.000000]      soft-irqs-on + irq-safe-A/12:  ok  |  ok  |  ok  |
[    0.000000]      hard-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
[    0.000000]      soft-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
[    0.000000]        sirq-safe-A => hirqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]        sirq-safe-A => hirqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]          hard-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]          soft-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]          hard-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]          soft-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/132:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/132:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/213:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/213:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/231:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/231:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/312:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/312:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/321:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/321:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/123:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/123:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/132:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/132:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/213:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/213:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/231:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/231:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/312:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/312:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/321:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/321:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/123:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/123:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/132:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/132:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/213:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/213:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/231:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/231:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/312:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/312:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/321:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/321:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq read-recursion/123:  ok  |
[    0.000000]       soft-irq read-recursion/123:  ok  |
[    0.000000]       hard-irq read-recursion/132:  ok  |
[    0.000000]       soft-irq read-recursion/132:  ok  |
[    0.000000]       hard-irq read-recursion/213:  ok  |
[    0.000000]       soft-irq read-recursion/213:  ok  |
[    0.000000]       hard-irq read-recursion/231:  ok  |
[    0.000000]       soft-irq read-recursion/231:  ok  |
[    0.000000]       hard-irq read-recursion/312:  ok  |
[    0.000000]       soft-irq read-recursion/312:  ok  |
[    0.000000]       hard-irq read-recursion/321:  ok  |
[    0.000000]       soft-irq read-recursion/321:  ok  |
[    0.000000] -------------------------------------------------------
[    0.000000] Good, all 218 testcases passed! |
[    0.000000] ---------------------------------
[    0.000000] tsc: Detected 3299.986 MHz processor
[    0.000999] Calibrating delay loop (skipped) preset value.. 6599.97 BogoMIPS (lpj=3299986)
[    0.002008] pid_max: default: 32768 minimum: 301
[    0.003176] Security Framework initialized
[    0.004304] Dentry cache hash table entries: 32768 (order: 6, 262144 bytes)
[    0.006232] Inode-cache hash table entries: 16384 (order: 5, 131072 bytes)
[    0.007245] Mount-cache hash table entries: 256
[    0.010107] Initializing cgroup subsys debug
[    0.010876] Initializing cgroup subsys freezer
[    0.011009] Initializing cgroup subsys perf_event
[    0.012104] Disabled fast string operations
[    0.014242] ftrace: allocating 10983 entries in 43 pages
[    0.020312] Getting VERSION: 50014
[    0.021011] Getting VERSION: 50014
[    0.021605] Getting ID: 0
[    0.022010] Getting ID: ff000000
[    0.022583] Getting LVT0: 8700
[    0.023008] Getting LVT1: 8400
[    0.023589] enabled ExtINT on CPU#0
[    0.025253] ENABLING IO-APIC IRQs
[    0.025839] init IO_APIC IRQs
[    0.026007]  apic 2 pin 0 not connected
[    0.027032] IOAPIC[0]: Set routing entry (2-1 -> 0x41 -> IRQ 1 Mode:0 Active:0 Dest:1)
[    0.028026] IOAPIC[0]: Set routing entry (2-2 -> 0x51 -> IRQ 0 Mode:0 Active:0 Dest:1)
[    0.029033] IOAPIC[0]: Set routing entry (2-3 -> 0x61 -> IRQ 3 Mode:0 Active:0 Dest:1)
[    0.030043] IOAPIC[0]: Set routing entry (2-4 -> 0x71 -> IRQ 4 Mode:0 Active:0 Dest:1)
[    0.031022] IOAPIC[0]: Set routing entry (2-5 -> 0x81 -> IRQ 5 Mode:0 Active:0 Dest:1)
[    0.033031] IOAPIC[0]: Set routing entry (2-6 -> 0x91 -> IRQ 6 Mode:0 Active:0 Dest:1)
[    0.034022] IOAPIC[0]: Set routing entry (2-7 -> 0xa1 -> IRQ 7 Mode:0 Active:0 Dest:1)
[    0.036021] IOAPIC[0]: Set routing entry (2-8 -> 0xb1 -> IRQ 8 Mode:0 Active:0 Dest:1)
[    0.037028] IOAPIC[0]: Set routing entry (2-9 -> 0xc1 -> IRQ 33 Mode:1 Active:0 Dest:1)
[    0.038025] IOAPIC[0]: Set routing entry (2-10 -> 0xd1 -> IRQ 34 Mode:1 Active:0 Dest:1)
[    0.040023] IOAPIC[0]: Set routing entry (2-11 -> 0xe1 -> IRQ 35 Mode:1 Active:0 Dest:1)
[    0.041019] IOAPIC[0]: Set routing entry (2-12 -> 0x22 -> IRQ 12 Mode:0 Active:0 Dest:1)
[    0.043020] IOAPIC[0]: Set routing entry (2-13 -> 0x42 -> IRQ 13 Mode:0 Active:0 Dest:1)
[    0.044021] IOAPIC[0]: Set routing entry (2-14 -> 0x52 -> IRQ 14 Mode:0 Active:0 Dest:1)
[    0.046005] IOAPIC[0]: Set routing entry (2-15 -> 0x62 -> IRQ 15 Mode:0 Active:0 Dest:1)
[    0.047016]  apic 2 pin 16 not connected
[    0.048002]  apic 2 pin 17 not connected
[    0.048693]  apic 2 pin 18 not connected
[    0.049001]  apic 2 pin 19 not connected
[    0.050001]  apic 2 pin 20 not connected
[    0.050681]  apic 2 pin 21 not connected
[    0.051001]  apic 2 pin 22 not connected
[    0.052001]  apic 2 pin 23 not connected
[    0.052857] ..TIMER: vector=0x51 apic1=0 pin1=2 apic2=-1 pin2=-1
[    0.054000] smpboot: CPU0: Intel Common KVM processor stepping 01
[    0.056001] Using local APIC timer interrupts.
[    0.056001] calibrating APIC timer ...
[    0.057995] ... lapic delta = 6248865
[    0.057995] ..... delta 6248865
[    0.057995] ..... mult: 268427509
[    0.057995] ..... calibration result: 999818
[    0.057995] ..... CPU clock speed is 3299.0401 MHz.
[    0.057995] ..... host bus clock speed is 999.0818 MHz.
[    0.057995] ... verify APIC timer
[    0.164423] ... jiffies delta = 100
[    0.164989] ... jiffies result ok
[    0.165669] Performance Events: unsupported Netburst CPU model 6 no PMU driver, software events only.
[    0.167001] XXX cpu=0 gcwq=ffff88000dc0cfc0 base=ffff88000dc11e80
[    0.167989] XXX cpu=0 nr_running=0 @ ffff88000dc11e80
[    0.168988] XXX cpu=0 nr_running=0 @ ffff88000dc11e88
[    0.169988] XXX cpu=1 gcwq=ffff88000dd0cfc0 base=ffff88000dd11e80
[    0.170988] XXX cpu=1 nr_running=0 @ ffff88000dd11e80
[    0.171987] XXX cpu=1 nr_running=0 @ ffff88000dd11e88
[    0.172988] XXX cpu=8 nr_running=0 @ ffffffff81d7c430
[    0.173987] XXX cpu=8 nr_running=12 @ ffffffff81d7c438
[    0.175416] ------------[ cut here ]------------
[    0.175981] WARNING: at /c/wfg/linux/kernel/workqueue.c:1220 worker_enter_idle+0x2b8/0x32b()
[    0.175981] Modules linked in:
[    0.175981] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-bisect-next-20120712-dirty #102
[    0.175981] Call Trace:
[    0.175981]  [<ffffffff81087455>] ? worker_enter_idle+0x2b8/0x32b
[    0.175981]  [<ffffffff810559d1>] warn_slowpath_common+0xae/0xdb
[    0.175981]  [<ffffffff81055a26>] warn_slowpath_null+0x28/0x31
[    0.175981]  [<ffffffff81087455>] worker_enter_idle+0x2b8/0x32b
[    0.175981]  [<ffffffff810874ee>] start_worker+0x26/0x42
[    0.175981]  [<ffffffff81c7dc4d>] init_workqueues+0x370/0x638
[    0.175981]  [<ffffffff81c7d8dd>] ? usermodehelper_init+0x8a/0x8a
[    0.175981]  [<ffffffff81000284>] do_one_initcall+0xce/0x272
[    0.175981]  [<ffffffff81c62652>] kernel_init+0x12e/0x3c1
[    0.175981]  [<ffffffff814b6e74>] kernel_thread_helper+0x4/0x10
[    0.175981]  [<ffffffff814b53b0>] ? retint_restore_args+0x13/0x13
[    0.175981]  [<ffffffff81c62524>] ? start_kernel+0x739/0x739
[    0.175981]  [<ffffffff814b6e70>] ? gs_change+0x13/0x13
[    0.175981] ---[ end trace c22d98677c4d3e37 ]---
[    0.178091] Testing tracer nop: PASSED
[    0.179138] NMI watchdog: disabled (cpu0): hardware events not enabled
[    0.181221] SMP alternatives: lockdep: fixing up alternatives
[    0.181995] smpboot: Booting Node   0, Processors  #1 OK
[    0.000999] kvm-clock: cpu 1, msr 0:dd11e01, secondary cpu clock
[    0.000999] masked ExtINT on CPU#1
[    0.000999] Disabled fast string operations
[    0.207203] Brought up 2 CPUs
[    0.207732] smpboot: Total of 2 processors activated (13199.94 BogoMIPS)
[    0.209280] CPU0 attaching sched-domain:
[    0.210007]  domain 0: span 0-1 level CPU
[    0.210710]   groups: 0 (cpu_power = 1023) 1
[    0.211440] CPU1 attaching sched-domain:
[    0.211983]  domain 0: span 0-1 level CPU
[    0.212694]   groups: 1 0 (cpu_power = 1023)
[    0.218232] devtmpfs: initialized
[    0.218877] device: 'platform': device_add
[    0.219027] PM: Adding info for No Bus:platform
[    0.220063] bus: 'platform': registered
[    0.221055] bus: 'cpu': registered
[    0.221683] device: 'cpu': device_add
[    0.222014] PM: Adding info for No Bus:cpu
[    0.223020] bus: 'memory': registered
[    0.223985] device: 'memory': device_add
[    0.224670] PM: Adding info for No Bus:memory
[    0.230912] device: 'memory0': device_add
[    0.231006] bus: 'memory': add device memory0
[    0.232066] PM: Adding info for memory:memory0
[    0.233071] device: 'memory1': device_add
[    0.233986] bus: 'memory': add device memory1
[    0.234765] PM: Adding info for memory:memory1
[    0.248722] atomic64 test passed for x86-64 platform with CX8 and with SSE
[    0.249977] device class 'regulator': registering
[    0.251020] Registering platform device 'reg-dummy'. Parent at platform
[    0.251991] device: 'reg-dummy': device_add
[    0.252985] bus: 'platform': add device reg-dummy
[    0.253848] PM: Adding info for platform:reg-dummy
[    0.260849] bus: 'platform': add driver reg-dummy
[    0.260984] bus: 'platform': driver_probe_device: matched device reg-dummy with driver reg-dummy
[    0.262977] bus: 'platform': really_probe: probing driver reg-dummy with device reg-dummy
[    0.264070] device: 'regulator.0': device_add
[    0.265133] PM: Adding info for No Bus:regulator.0
[    0.266085] dummy: 
[    0.273208] driver: 'reg-dummy': driver_bound: bound to device 'reg-dummy'
[    0.274005] bus: 'platform': really_probe: bound device reg-dummy to driver reg-dummy
[    0.275092] RTC time:  0:34:29, date: 07/13/12
[    0.276994] NET: Registered protocol family 16
[    0.277905] device class 'bdi': registering
[    0.278011] device class 'tty': registering
[    0.279013] bus: 'node': registered
[    0.286795] device: 'node': device_add
[    0.287020] PM: Adding info for No Bus:node
[    0.288127] device class 'dma': registering
[    0.289071] device: 'node0': device_add
[    0.289747] bus: 'node': add device node0
[    0.289994] PM: Adding info for node:node0
[    0.291031] device: 'cpu0': device_add
[    0.291977] bus: 'cpu': add device cpu0
[    0.292677] PM: Adding info for cpu:cpu0
[    0.299186] device: 'cpu1': device_add
[    0.299860] bus: 'cpu': add device cpu1
[    0.299992] PM: Adding info for cpu:cpu1
[    0.301007] mtrr: your CPUs had inconsistent variable MTRR settings
[    0.301969] mtrr: your CPUs had inconsistent MTRRdefType settings
[    0.302968] mtrr: probably your BIOS does not setup all CPUs.
[    0.303968] mtrr: corrected configuration.
[    0.311821] device: 'default': device_add
[    0.312027] PM: Adding info for No Bus:default
[    0.314526] bio: create slab <bio-0> at 0
[    0.315020] device class 'block': registering
[    0.317769] device class 'misc': registering
[    0.318022] bus: 'serio': registered
[    0.318967] device class 'input': registering
[    0.320006] device class 'power_supply': registering
[    0.320994] device class 'leds': registering
[    0.321795] device class 'net': registering
[    0.322030] device: 'lo': device_add
[    0.323147] PM: Adding info for No Bus:lo
[    0.330653] Switching to clocksource kvm-clock
[    0.332373] Warning: could not register all branches stats
[    0.333365] Warning: could not register annotated branches stats
[    0.413675] device class 'mem': registering
[    0.414493] device: 'mem': device_add
[    0.420754] PM: Adding info for No Bus:mem
[    0.421550] device: 'kmem': device_add
[    0.423861] PM: Adding info for No Bus:kmem
[    0.424642] device: 'null': device_add
[    0.426918] PM: Adding info for No Bus:null
[    0.427694] device: 'zero': device_add
[    0.430025] PM: Adding info for No Bus:zero
[    0.430773] device: 'full': device_add
[    0.433074] PM: Adding info for No Bus:full
[    0.433838] device: 'random': device_add
[    0.436151] PM: Adding info for No Bus:random
[    0.436919] device: 'urandom': device_add
[    0.439276] PM: Adding info for No Bus:urandom
[    0.440100] device: 'kmsg': device_add
[    0.442396] PM: Adding info for No Bus:kmsg
[    0.443148] device: 'tty': device_add
[    0.445317] PM: Adding info for No Bus:tty
[    0.446087] device: 'console': device_add
[    0.448386] PM: Adding info for No Bus:console
[    0.449224] NET: Registered protocol family 1
[    0.450284] Unpacking initramfs...
[    1.877893] debug: unmapping init [mem 0xffff88000e8d6000-0xffff88000ffeffff]
[    1.903095] DMA-API: preallocated 32768 debug entries
[    1.903966] DMA-API: debugging enabled by kernel config
[    1.905059] Registering platform device 'rtc_cmos'. Parent at platform
[    1.906178] device: 'rtc_cmos': device_add
[    1.906884] bus: 'platform': add device rtc_cmos
[    1.907727] PM: Adding info for platform:rtc_cmos
[    1.908579] platform rtc_cmos: registered platform RTC device (no PNP device found)
[    1.910170] device: 'snapshot': device_add
[    1.911083] PM: Adding info for No Bus:snapshot
[    1.911949] bus: 'clocksource': registered
[    1.912686] device: 'clocksource': device_add
[    1.913480] PM: Adding info for No Bus:clocksource
[    1.914328] device: 'clocksource0': device_add
[    1.915092] bus: 'clocksource': add device clocksource0
[    1.915985] PM: Adding info for clocksource:clocksource0
[    1.916938] bus: 'platform': add driver alarmtimer
[    1.917799] Registering platform device 'alarmtimer'. Parent at platform
[    1.918948] device: 'alarmtimer': device_add
[    1.919693] bus: 'platform': add device alarmtimer
[    1.920546] PM: Adding info for platform:alarmtimer
[    1.921413] bus: 'platform': driver_probe_device: matched device alarmtimer with driver alarmtimer
[    1.922931] bus: 'platform': really_probe: probing driver alarmtimer with device alarmtimer
[    1.924342] driver: 'alarmtimer': driver_bound: bound to device 'alarmtimer'
[    1.925525] bus: 'platform': really_probe: bound device alarmtimer to driver alarmtimer
[    1.926945] audit: initializing netlink socket (disabled)
[    1.927924] type=2000 audit(1342139670.926:1): initialized
[    1.941097] Testing tracer function: PASSED
[    2.087999] Testing dynamic ftrace: PASSED
[    2.338209] Testing dynamic ftrace ops #1: (1 0 1 1 0) (1 1 2 1 0) (2 1 3 1 940) (2 2 4 1 1027) PASSED
[    2.431997] Testing dynamic ftrace ops #2: (1 0 1 28 0) (1 1 2 297 0) (2 1 3 1 13) (2 2 4 84 96) PASSED
[    2.540363] bus: 'event_source': registered
[    2.541114] device: 'breakpoint': device_add
[    2.541860] bus: 'event_source': add device breakpoint
[    2.542799] PM: Adding info for event_source:breakpoint
[    2.543767] device: 'tracepoint': device_add
[    2.544535] bus: 'event_source': add device tracepoint
[    2.545493] PM: Adding info for event_source:tracepoint
[    2.546442] device: 'software': device_add
[    2.547170] bus: 'event_source': add device software
[    2.548449] PM: Adding info for event_source:software
[    2.549665] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[    2.560548] msgmni has been set to 390
[    2.561843] cryptomgr_test (26) used greatest stack depth: 5736 bytes left
[    2.563190] alg: No test for stdrng (krng)
[    2.564112] device class 'bsg': registering
[    2.564859] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 254)
[    2.566155] io scheduler noop registered (default)
[    2.567035] device: 'ptyp0': device_add
[    2.567860] PM: Adding info for No Bus:ptyp0
[... several hundred similar legacy pty/tty device registration lines (ptyp1 through ttyqc) trimmed ...]
[    3.023452] device: 'ttyqd': device_add
[    3.024268] PM: Adding info for No Bus:ttyqd
[    3.025054] device: 'ttyqe': device_add
[    3.025844] PM: Adding info for No Bus:ttyqe
[    3.026625] device: 'ttyqf': device_add
[    3.027537] PM: Adding info for No Bus:ttyqf
[    3.028323] device: 'ttyr0': device_add
[    3.029115] PM: Adding info for No Bus:ttyr0
[    3.029863] device: 'ttyr1': device_add
[    3.030676] PM: Adding info for No Bus:ttyr1
[    3.031451] device: 'ttyr2': device_add
[    3.032239] PM: Adding info for No Bus:ttyr2
[    3.033054] device: 'ttyr3': device_add
[    3.033868] PM: Adding info for No Bus:ttyr3
[    3.034656] device: 'ttyr4': device_add
[    3.035497] PM: Adding info for No Bus:ttyr4
[    3.036290] device: 'ttyr5': device_add
[    3.037081] PM: Adding info for No Bus:ttyr5
[    3.037834] device: 'ttyr6': device_add
[    3.038699] PM: Adding info for No Bus:ttyr6
[    3.039523] device: 'ttyr7': device_add
[    3.040362] PM: Adding info for No Bus:ttyr7
[    3.041124] device: 'ttyr8': device_add
[    3.041923] PM: Adding info for No Bus:ttyr8
[    3.042711] device: 'ttyr9': device_add
[    3.043518] PM: Adding info for No Bus:ttyr9
[    3.044291] device: 'ttyra': device_add
[    3.045106] PM: Adding info for No Bus:ttyra
[    3.045856] device: 'ttyrb': device_add
[    3.046671] PM: Adding info for No Bus:ttyrb
[    3.047448] device: 'ttyrc': device_add
[    3.048237] PM: Adding info for No Bus:ttyrc
[    3.048996] device: 'ttyrd': device_add
[    3.049815] PM: Adding info for No Bus:ttyrd
[    3.050593] device: 'ttyre': device_add
[    3.051407] PM: Adding info for No Bus:ttyre
[    3.052178] device: 'ttyrf': device_add
[    3.052983] PM: Adding info for No Bus:ttyrf
[    3.053760] device: 'ttys0': device_add
[    3.054557] PM: Adding info for No Bus:ttys0
[    3.055331] device: 'ttys1': device_add
[    3.056142] PM: Adding info for No Bus:ttys1
[    3.056894] device: 'ttys2': device_add
[    3.057711] PM: Adding info for No Bus:ttys2
[    3.058530] device: 'ttys3': device_add
[    3.059348] PM: Adding info for No Bus:ttys3
[    3.060126] device: 'ttys4': device_add
[    3.060952] PM: Adding info for No Bus:ttys4
[    3.061761] device: 'ttys5': device_add
[    3.062568] PM: Adding info for No Bus:ttys5
[    3.063340] device: 'ttys6': device_add
[    3.064139] PM: Adding info for No Bus:ttys6
[    3.064892] device: 'ttys7': device_add
[    3.065730] PM: Adding info for No Bus:ttys7
[    3.066504] device: 'ttys8': device_add
[    3.067329] PM: Adding info for No Bus:ttys8
[    3.068094] device: 'ttys9': device_add
[    3.068917] PM: Adding info for No Bus:ttys9
[    3.069715] device: 'ttysa': device_add
[    3.070589] PM: Adding info for No Bus:ttysa
[    3.071363] device: 'ttysb': device_add
[    3.072175] PM: Adding info for No Bus:ttysb
[    3.072925] device: 'ttysc': device_add
[    3.073730] PM: Adding info for No Bus:ttysc
[    3.074501] device: 'ttysd': device_add
[    3.075322] PM: Adding info for No Bus:ttysd
[    3.076085] device: 'ttyse': device_add
[    3.076869] PM: Adding info for No Bus:ttyse
[    3.077658] device: 'ttysf': device_add
[    3.078498] PM: Adding info for No Bus:ttysf
[    3.079273] device: 'ttyt0': device_add
[    3.080082] PM: Adding info for No Bus:ttyt0
[    3.080835] device: 'ttyt1': device_add
[    3.081635] PM: Adding info for No Bus:ttyt1
[    3.082408] device: 'ttyt2': device_add
[    3.083229] PM: Adding info for No Bus:ttyt2
[    3.084032] device: 'ttyt3': device_add
[    3.084841] PM: Adding info for No Bus:ttyt3
[    3.085628] device: 'ttyt4': device_add
[    3.086474] PM: Adding info for No Bus:ttyt4
[    3.087274] device: 'ttyt5': device_add
[    3.088077] PM: Adding info for No Bus:ttyt5
[    3.088844] device: 'ttyt6': device_add
[    3.089663] PM: Adding info for No Bus:ttyt6
[    3.090435] device: 'ttyt7': device_add
[    3.091239] PM: Adding info for No Bus:ttyt7
[    3.091992] device: 'ttyt8': device_add
[    3.092834] PM: Adding info for No Bus:ttyt8
[    3.093608] device: 'ttyt9': device_add
[    3.094436] PM: Adding info for No Bus:ttyt9
[    3.095213] device: 'ttyta': device_add
[    3.096035] PM: Adding info for No Bus:ttyta
[    3.096784] device: 'ttytb': device_add
[    3.097604] PM: Adding info for No Bus:ttytb
[    3.098391] device: 'ttytc': device_add
[    3.099178] PM: Adding info for No Bus:ttytc
[    3.099925] device: 'ttytd': device_add
[    3.100815] PM: Adding info for No Bus:ttytd
[    3.101592] device: 'ttyte': device_add
[    3.102408] PM: Adding info for No Bus:ttyte
[    3.103184] device: 'ttytf': device_add
[    3.103981] PM: Adding info for No Bus:ttytf
[    3.104771] device: 'ttyu0': device_add
[    3.105592] PM: Adding info for No Bus:ttyu0
[    3.106370] device: 'ttyu1': device_add
[    3.107163] PM: Adding info for No Bus:ttyu1
[    3.107913] device: 'ttyu2': device_add
[    3.108743] PM: Adding info for No Bus:ttyu2
[    3.109558] device: 'ttyu3': device_add
[    3.110363] PM: Adding info for No Bus:ttyu3
[    3.111125] device: 'ttyu4': device_add
[    3.111951] PM: Adding info for No Bus:ttyu4
[    3.112754] device: 'ttyu5': device_add
[    3.113589] PM: Adding info for No Bus:ttyu5
[    3.114364] device: 'ttyu6': device_add
[    3.115157] PM: Adding info for No Bus:ttyu6
[    3.115905] device: 'ttyu7': device_add
[    3.116726] PM: Adding info for No Bus:ttyu7
[    3.117499] device: 'ttyu8': device_add
[    3.118324] PM: Adding info for No Bus:ttyu8
[    3.119085] device: 'ttyu9': device_add
[    3.119935] PM: Adding info for No Bus:ttyu9
[    3.120725] device: 'ttyua': device_add
[    3.121548] PM: Adding info for No Bus:ttyua
[    3.122324] device: 'ttyub': device_add
[    3.123128] PM: Adding info for No Bus:ttyub
[    3.123878] device: 'ttyuc': device_add
[    3.124693] PM: Adding info for No Bus:ttyuc
[    3.125467] device: 'ttyud': device_add
[    3.126270] PM: Adding info for No Bus:ttyud
[    3.127030] device: 'ttyue': device_add
[    3.127826] PM: Adding info for No Bus:ttyue
[    3.128616] device: 'ttyuf': device_add
[    3.129428] PM: Adding info for No Bus:ttyuf
[    3.130259] device: 'ttyv0': device_add
[    3.131072] PM: Adding info for No Bus:ttyv0
[    3.131833] device: 'ttyv1': device_add
[    3.132640] PM: Adding info for No Bus:ttyv1
[    3.133411] device: 'ttyv2': device_add
[    3.134216] PM: Adding info for No Bus:ttyv2
[    3.134993] device: 'ttyv3': device_add
[    3.135811] PM: Adding info for No Bus:ttyv3
[    3.136586] device: 'ttyv4': device_add
[    3.137405] PM: Adding info for No Bus:ttyv4
[    3.138199] device: 'ttyv5': device_add
[    3.139049] PM: Adding info for No Bus:ttyv5
[    3.139809] device: 'ttyv6': device_add
[    3.140609] PM: Adding info for No Bus:ttyv6
[    3.141380] device: 'ttyv7': device_add
[    3.142184] PM: Adding info for No Bus:ttyv7
[    3.142935] device: 'ttyv8': device_add
[    3.143734] PM: Adding info for No Bus:ttyv8
[    3.144503] device: 'ttyv9': device_add
[    3.145346] PM: Adding info for No Bus:ttyv9
[    3.146113] device: 'ttyva': device_add
[    3.146969] PM: Adding info for No Bus:ttyva
[    3.147764] device: 'ttyvb': device_add
[    3.148581] PM: Adding info for No Bus:ttyvb
[    3.149354] device: 'ttyvc': device_add
[    3.150167] PM: Adding info for No Bus:ttyvc
[    3.150921] device: 'ttyvd': device_add
[    3.151720] PM: Adding info for No Bus:ttyvd
[    3.152487] device: 'ttyve': device_add
[    3.153305] PM: Adding info for No Bus:ttyve
[    3.154068] device: 'ttyvf': device_add
[    3.154853] PM: Adding info for No Bus:ttyvf
[    3.155640] device: 'ttyw0': device_add
[    3.156463] PM: Adding info for No Bus:ttyw0
[    3.157228] device: 'ttyw1': device_add
[    3.158047] PM: Adding info for No Bus:ttyw1
[    3.158811] device: 'ttyw2': device_add
[    3.159612] PM: Adding info for No Bus:ttyw2
[    3.160482] device: 'ttyw3': device_add
[    3.161305] PM: Adding info for No Bus:ttyw3
[    3.162068] device: 'ttyw4': device_add
[    3.162865] PM: Adding info for No Bus:ttyw4
[    3.163662] device: 'ttyw5': device_add
[    3.164492] PM: Adding info for No Bus:ttyw5
[    3.165281] device: 'ttyw6': device_add
[    3.166075] PM: Adding info for No Bus:ttyw6
[    3.166822] device: 'ttyw7': device_add
[    3.167636] PM: Adding info for No Bus:ttyw7
[    3.168423] device: 'ttyw8': device_add
[    3.169224] PM: Adding info for No Bus:ttyw8
[    3.169973] device: 'ttyw9': device_add
[    3.170771] PM: Adding info for No Bus:ttyw9
[    3.171542] device: 'ttywa': device_add
[    3.172363] PM: Adding info for No Bus:ttywa
[    3.173138] device: 'ttywb': device_add
[    3.173969] PM: Adding info for No Bus:ttywb
[    3.174743] device: 'ttywc': device_add
[    3.175560] PM: Adding info for No Bus:ttywc
[    3.176335] device: 'ttywd': device_add
[    3.177122] PM: Adding info for No Bus:ttywd
[    3.177869] device: 'ttywe': device_add
[    3.178728] PM: Adding info for No Bus:ttywe
[    3.179501] device: 'ttywf': device_add
[    3.180324] PM: Adding info for No Bus:ttywf
[    3.181097] device: 'ttyx0': device_add
[    3.181886] PM: Adding info for No Bus:ttyx0
[    3.182669] device: 'ttyx1': device_add
[    3.183486] PM: Adding info for No Bus:ttyx1
[    3.184260] device: 'ttyx2': device_add
[    3.185075] PM: Adding info for No Bus:ttyx2
[    3.185852] device: 'ttyx3': device_add
[    3.186673] PM: Adding info for No Bus:ttyx3
[    3.187448] device: 'ttyx4': device_add
[    3.188278] PM: Adding info for No Bus:ttyx4
[    3.189057] device: 'ttyx5': device_add
[    3.189860] PM: Adding info for No Bus:ttyx5
[    3.190728] device: 'ttyx6': device_add
[    3.191556] PM: Adding info for No Bus:ttyx6
[    3.192328] device: 'ttyx7': device_add
[    3.193118] PM: Adding info for No Bus:ttyx7
[    3.193864] device: 'ttyx8': device_add
[    3.194678] PM: Adding info for No Bus:ttyx8
[    3.195450] device: 'ttyx9': device_add
[    3.196237] PM: Adding info for No Bus:ttyx9
[    3.196990] device: 'ttyxa': device_add
[    3.197806] PM: Adding info for No Bus:ttyxa
[    3.198602] device: 'ttyxb': device_add
[    3.199416] PM: Adding info for No Bus:ttyxb
[    3.200181] device: 'ttyxc': device_add
[    3.201041] PM: Adding info for No Bus:ttyxc
[    3.201799] device: 'ttyxd': device_add
[    3.202616] PM: Adding info for No Bus:ttyxd
[    3.203391] device: 'ttyxe': device_add
[    3.204178] PM: Adding info for No Bus:ttyxe
[    3.204924] device: 'ttyxf': device_add
[    3.205742] PM: Adding info for No Bus:ttyxf
[    3.206517] device: 'ttyy0': device_add
[    3.207337] PM: Adding info for No Bus:ttyy0
[    3.208110] device: 'ttyy1': device_add
[    3.208922] PM: Adding info for No Bus:ttyy1
[    3.209699] device: 'ttyy2': device_add
[    3.210500] PM: Adding info for No Bus:ttyy2
[    3.211301] device: 'ttyy3': device_add
[    3.212108] PM: Adding info for No Bus:ttyy3
[    3.212857] device: 'ttyy4': device_add
[    3.213688] PM: Adding info for No Bus:ttyy4
[    3.214477] device: 'ttyy5': device_add
[    3.215284] PM: Adding info for No Bus:ttyy5
[    3.216061] device: 'ttyy6': device_add
[    3.216874] PM: Adding info for No Bus:ttyy6
[    3.217663] device: 'ttyy7': device_add
[    3.218473] PM: Adding info for No Bus:ttyy7
[    3.219230] device: 'ttyy8': device_add
[    3.220127] PM: Adding info for No Bus:ttyy8
[    3.220874] device: 'ttyy9': device_add
[    3.221673] PM: Adding info for No Bus:ttyy9
[    3.222442] device: 'ttyya': device_add
[    3.223257] PM: Adding info for No Bus:ttyya
[    3.224007] device: 'ttyyb': device_add
[    3.224836] PM: Adding info for No Bus:ttyyb
[    3.225632] device: 'ttyyc': device_add
[    3.226435] PM: Adding info for No Bus:ttyyc
[    3.227195] device: 'ttyyd': device_add
[    3.228054] PM: Adding info for No Bus:ttyyd
[    3.228816] device: 'ttyye': device_add
[    3.229618] PM: Adding info for No Bus:ttyye
[    3.230390] device: 'ttyyf': device_add
[    3.231192] PM: Adding info for No Bus:ttyyf
[    3.231942] device: 'ttyz0': device_add
[    3.232747] PM: Adding info for No Bus:ttyz0
[    3.233533] device: 'ttyz1': device_add
[    3.234356] PM: Adding info for No Bus:ttyz1
[    3.235127] device: 'ttyz2': device_add
[    3.235925] PM: Adding info for No Bus:ttyz2
[    3.236729] device: 'ttyz3': device_add
[    3.237534] PM: Adding info for No Bus:ttyz3
[    3.238318] device: 'ttyz4': device_add
[    3.239141] PM: Adding info for No Bus:ttyz4
[    3.239909] device: 'ttyz5': device_add
[    3.240714] PM: Adding info for No Bus:ttyz5
[    3.241490] device: 'ttyz6': device_add
[    3.242330] PM: Adding info for No Bus:ttyz6
[    3.243105] device: 'ttyz7': device_add
[    3.243887] PM: Adding info for No Bus:ttyz7
[    3.244662] device: 'ttyz8': device_add
[    3.245479] PM: Adding info for No Bus:ttyz8
[    3.246252] device: 'ttyz9': device_add
[    3.247062] PM: Adding info for No Bus:ttyz9
[    3.247833] device: 'ttyza': device_add
[    3.248654] PM: Adding info for No Bus:ttyza
[    3.249430] device: 'ttyzb': device_add
[    3.250350] PM: Adding info for No Bus:ttyzb
[    3.251129] device: 'ttyzc': device_add
[    3.251924] PM: Adding info for No Bus:ttyzc
[    3.252764] device: 'ttyzd': device_add
[    3.253575] PM: Adding info for No Bus:ttyzd
[    3.254348] device: 'ttyze': device_add
[    3.255179] PM: Adding info for No Bus:ttyze
[    3.255925] device: 'ttyzf': device_add
[    3.256742] PM: Adding info for No Bus:ttyzf
[    3.257515] device: 'ttya0': device_add
[    3.258340] PM: Adding info for No Bus:ttya0
[    3.259116] device: 'ttya1': device_add
[    3.259921] PM: Adding info for No Bus:ttya1
[    3.260709] device: 'ttya2': device_add
[    3.261648] PM: Adding info for No Bus:ttya2
[    3.262544] device: 'ttya3': device_add
[    3.263339] PM: Adding info for No Bus:ttya3
[    3.264096] device: 'ttya4': device_add
[    3.264905] PM: Adding info for No Bus:ttya4
[    3.265696] device: 'ttya5': device_add
[    3.266503] PM: Adding info for No Bus:ttya5
[    3.267272] device: 'ttya6': device_add
[    3.268087] PM: Adding info for No Bus:ttya6
[    3.268868] device: 'ttya7': device_add
[    3.269703] PM: Adding info for No Bus:ttya7
[    3.270478] device: 'ttya8': device_add
[    3.271276] PM: Adding info for No Bus:ttya8
[    3.272052] device: 'ttya9': device_add
[    3.272845] PM: Adding info for No Bus:ttya9
[    3.273623] device: 'ttyaa': device_add
[    3.274436] PM: Adding info for No Bus:ttyaa
[    3.275196] device: 'ttyab': device_add
[    3.275993] PM: Adding info for No Bus:ttyab
[    3.276789] device: 'ttyac': device_add
[    3.277602] PM: Adding info for No Bus:ttyac
[    3.278396] device: 'ttyad': device_add
[    3.279199] PM: Adding info for No Bus:ttyad
[    3.280051] device: 'ttyae': device_add
[    3.280842] PM: Adding info for No Bus:ttyae
[    3.281620] device: 'ttyaf': device_add
[    3.282459] PM: Adding info for No Bus:ttyaf
[    3.283221] device: 'ttyb0': device_add
[    3.284050] PM: Adding info for No Bus:ttyb0
[    3.284810] device: 'ttyb1': device_add
[    3.285620] PM: Adding info for No Bus:ttyb1
[    3.286407] device: 'ttyb2': device_add
[    3.287211] PM: Adding info for No Bus:ttyb2
[    3.287992] device: 'ttyb3': device_add
[    3.288811] PM: Adding info for No Bus:ttyb3
[    3.289586] device: 'ttyb4': device_add
[    3.290418] PM: Adding info for No Bus:ttyb4
[    3.291193] device: 'ttyb5': device_add
[    3.291993] PM: Adding info for No Bus:ttyb5
[    3.292768] device: 'ttyb6': device_add
[    3.293573] PM: Adding info for No Bus:ttyb6
[    3.294355] device: 'ttyb7': device_add
[    3.295176] PM: Adding info for No Bus:ttyb7
[    3.295928] device: 'ttyb8': device_add
[    3.296729] PM: Adding info for No Bus:ttyb8
[    3.297501] device: 'ttyb9': device_add
[    3.298313] PM: Adding info for No Bus:ttyb9
[    3.299074] device: 'ttyba': device_add
[    3.299866] PM: Adding info for No Bus:ttyba
[    3.300641] device: 'ttybb': device_add
[    3.301461] PM: Adding info for No Bus:ttybb
[    3.302233] device: 'ttybc': device_add
[    3.303068] PM: Adding info for No Bus:ttybc
[    3.303835] device: 'ttybd': device_add
[    3.304728] PM: Adding info for No Bus:ttybd
[    3.305502] device: 'ttybe': device_add
[    3.306321] PM: Adding info for No Bus:ttybe
[    3.307083] device: 'ttybf': device_add
[    3.307856] PM: Adding info for No Bus:ttybf
[    3.308640] device: 'ttyc0': device_add
[    3.309499] PM: Adding info for No Bus:ttyc0
[    3.310355] device: 'ttyc1': device_add
[    3.311156] PM: Adding info for No Bus:ttyc1
[    3.311918] device: 'ttyc2': device_add
[    3.312748] PM: Adding info for No Bus:ttyc2
[    3.313556] device: 'ttyc3': device_add
[    3.314375] PM: Adding info for No Bus:ttyc3
[    3.315139] device: 'ttyc4': device_add
[    3.315940] PM: Adding info for No Bus:ttyc4
[    3.316729] device: 'ttyc5': device_add
[    3.317548] PM: Adding info for No Bus:ttyc5
[    3.318336] device: 'ttyc6': device_add
[    3.319122] PM: Adding info for No Bus:ttyc6
[    3.319886] device: 'ttyc7': device_add
[    3.320716] PM: Adding info for No Bus:ttyc7
[    3.321504] device: 'ttyc8': device_add
[    3.322457] PM: Adding info for No Bus:ttyc8
[    3.323219] device: 'ttyc9': device_add
[    3.324042] PM: Adding info for No Bus:ttyc9
[    3.324789] device: 'ttyca': device_add
[    3.325603] PM: Adding info for No Bus:ttyca
[    3.326379] device: 'ttycb': device_add
[    3.327170] PM: Adding info for No Bus:ttycb
[    3.327920] device: 'ttycc': device_add
[    3.328761] PM: Adding info for No Bus:ttycc
[    3.329549] device: 'ttycd': device_add
[    3.330363] PM: Adding info for No Bus:ttycd
[    3.331123] device: 'ttyce': device_add
[    3.331922] PM: Adding info for No Bus:ttyce
[    3.332702] device: 'ttycf': device_add
[    3.333503] PM: Adding info for No Bus:ttycf
[    3.334271] device: 'ttyd0': device_add
[    3.335081] PM: Adding info for No Bus:ttyd0
[    3.335829] device: 'ttyd1': device_add
[    3.336684] PM: Adding info for No Bus:ttyd1
[    3.337470] device: 'ttyd2': device_add
[    3.338300] PM: Adding info for No Bus:ttyd2
[    3.339105] device: 'ttyd3': device_add
[    3.339995] PM: Adding info for No Bus:ttyd3
[    3.340855] device: 'ttyd4': device_add
[    3.341674] PM: Adding info for No Bus:ttyd4
[    3.342472] device: 'ttyd5': device_add
[    3.343288] PM: Adding info for No Bus:ttyd5
[    3.344059] device: 'ttyd6': device_add
[    3.344840] PM: Adding info for No Bus:ttyd6
[    3.345615] device: 'ttyd7': device_add
[    3.346447] PM: Adding info for No Bus:ttyd7
[    3.347222] device: 'ttyd8': device_add
[    3.348047] PM: Adding info for No Bus:ttyd8
[    3.348813] device: 'ttyd9': device_add
[    3.349614] PM: Adding info for No Bus:ttyd9
[    3.350387] device: 'ttyda': device_add
[    3.351193] PM: Adding info for No Bus:ttyda
[    3.351943] device: 'ttydb': device_add
[    3.352741] PM: Adding info for No Bus:ttydb
[    3.353513] device: 'ttydc': device_add
[    3.354341] PM: Adding info for No Bus:ttydc
[    3.355115] device: 'ttydd': device_add
[    3.355904] PM: Adding info for No Bus:ttydd
[    3.356689] device: 'ttyde': device_add
[    3.357507] PM: Adding info for No Bus:ttyde
[    3.358294] device: 'ttydf': device_add
[    3.359094] PM: Adding info for No Bus:ttydf
[    3.359847] device: 'ttye0': device_add
[    3.360646] PM: Adding info for No Bus:ttye0
[    3.361419] device: 'ttye1': device_add
[    3.362223] PM: Adding info for No Bus:ttye1
[    3.362981] device: 'ttye2': device_add
[    3.363838] PM: Adding info for No Bus:ttye2
[    3.364654] device: 'ttye3': device_add
[    3.365489] PM: Adding info for No Bus:ttye3
[    3.366263] device: 'ttye4': device_add
[    3.367075] PM: Adding info for No Bus:ttye4
[    3.367841] device: 'ttye5': device_add
[    3.368679] PM: Adding info for No Bus:ttye5
[    3.369456] device: 'ttye6': device_add
[    3.370333] PM: Adding info for No Bus:ttye6
[    3.371096] device: 'ttye7': device_add
[    3.371877] PM: Adding info for No Bus:ttye7
[    3.372661] device: 'ttye8': device_add
[    3.373488] PM: Adding info for No Bus:ttye8
[    3.374270] device: 'ttye9': device_add
[    3.375069] PM: Adding info for No Bus:ttye9
[    3.375820] device: 'ttyea': device_add
[    3.376636] PM: Adding info for No Bus:ttyea
[    3.377409] device: 'ttyeb': device_add
[    3.378198] PM: Adding info for No Bus:ttyeb
[    3.378959] device: 'ttyec': device_add
[    3.379776] PM: Adding info for No Bus:ttyec
[    3.380548] device: 'ttyed': device_add
[    3.381374] PM: Adding info for No Bus:ttyed
[    3.382157] device: 'ttyee': device_add
[    3.382944] PM: Adding info for No Bus:ttyee
[    3.383718] device: 'ttyef': device_add
[    3.384533] PM: Adding info for No Bus:ttyef
[    3.385305] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[    3.386419] Registering platform device 'serial8250'. Parent at platform
[    3.387574] device: 'serial8250': device_add
[    3.388345] bus: 'platform': add device serial8250
[    3.389203] PM: Adding info for platform:serial8250
[    3.414529] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[    3.415600] device: 'ttyS0': device_add
[    3.416501] PM: Adding info for No Bus:ttyS0
[    3.417378] device: 'ttyS1': device_add
[    3.418315] PM: Adding info for No Bus:ttyS1
[    3.419133] device: 'ttyS2': device_add
[    3.419926] PM: Adding info for No Bus:ttyS2
[    3.420806] device: 'ttyS3': device_add
[    3.421680] PM: Adding info for No Bus:ttyS3
[    3.422475] bus: 'platform': add driver serial8250
[    3.423329] bus: 'platform': driver_probe_device: matched device serial8250 with driver serial8250
[    3.424864] bus: 'platform': really_probe: probing driver serial8250 with device serial8250
[    3.426333] driver: 'serial8250': driver_bound: bound to device 'serial8250'
[    3.427551] bus: 'platform': really_probe: bound device serial8250 to driver serial8250
[    3.428975] device: 'ttyprintk': device_add
[    3.429964] PM: Adding info for No Bus:ttyprintk
[    3.430795] bus: 'platform': add driver tpm_tis
[    3.431616] Registering platform device 'tpm_tis'. Parent at platform
[    3.432728] device: 'tpm_tis': device_add
[    3.433455] bus: 'platform': add device tpm_tis
[    3.434306] PM: Adding info for platform:tpm_tis
[    3.435137] bus: 'platform': driver_probe_device: matched device tpm_tis with driver tpm_tis
[    3.436587] bus: 'platform': really_probe: probing driver tpm_tis with device tpm_tis
[    3.437930] driver: 'tpm_tis': driver_bound: bound to device 'tpm_tis'
[    3.439067] bus: 'platform': really_probe: bound device tpm_tis to driver tpm_tis
[    3.440362] device: 'tpm0': device_add
[    3.441156] PM: Adding info for No Bus:tpm0
[    4.195061] device: 'tpm0': device_unregister
[    4.195834] PM: Removing info for No Bus:tpm0
[    4.197167] device: 'tpm0': device_create_release
[    4.198234] PM: Removing info for platform:tpm_tis
[    4.199181] bus: 'platform': remove device tpm_tis
[    4.200169] bus: 'platform': remove driver tpm_tis
[    4.201056] driver: 'tpm_tis': driver_release
[    4.201866] Registering platform device 'i8042'. Parent at platform
[    4.202958] device: 'i8042': device_add
[    4.203650] bus: 'platform': add device i8042
[    4.204445] PM: Adding info for platform:i8042
[    4.205233] bus: 'platform': add driver i8042
[    4.205989] bus: 'platform': driver_probe_device: matched device i8042 with driver i8042
[    4.207383] bus: 'platform': really_probe: probing driver i8042 with device i8042
[    4.209696] serio: i8042 KBD port at 0x60,0x64 irq 1
[    4.210710] serio: i8042 AUX port at 0x60,0x64 irq 12
[    4.211694] device: 'serio0': device_add
[    4.212434] bus: 'serio': add device serio0
[    4.213211] PM: Adding info for serio:serio0
[    4.214044] driver: 'i8042': driver_bound: bound to device 'i8042'
[    4.215125] device: 'serio1': device_add
[    4.215818] bus: 'serio': add device serio1
[    4.216637] PM: Adding info for serio:serio1
[    4.217436] bus: 'platform': really_probe: bound device i8042 to driver i8042
[    4.218699] bus: 'serio': add driver atkbd
[    4.219484] cpuidle: using governor ladder
[    4.220333] 
[    4.220333] printing PIC contents
[    4.221167] ... PIC  IMR: fffb
[    4.221702] ... PIC  IRR: 1013
[    4.222263] ... PIC  ISR: 0000
[    4.222790] ... PIC ELCR: 0c00
[    4.223345] printing local APIC contents on CPU#0/0:
[    4.224185] ... APIC ID:      00000000 (0)
[    4.224329] ... APIC VERSION: 00050014
[    4.224329] ... APIC TASKPRI: 00000000 (00)
[    4.224329] ... APIC PROCPRI: 00000000
[    4.224329] ... APIC LDR: 01000000
[    4.224329] ... APIC DFR: ffffffff
[    4.224329] ... APIC SPIV: 000001ff
[    4.224329] ... APIC ISR field:
[    4.224329] 0000000000000000000000000000000000000000000000000000000000000000
[    4.224329] ... APIC TMR field:
[    4.224329] 0000000000000000000000000000000000000000000000000000000000000000
[    4.224329] ... APIC IRR field:
[    4.224329] 0000000000000000000000000000000000000000000000000000000020008000
[    4.224329] ... APIC ESR: 00000000
[    4.224329] ... APIC ICR: 00000841
[    4.224329] ... APIC ICR2: 01000000
[    4.224329] ... APIC LVTT: 000000ef
[    4.224329] ... APIC LVTPC: 00010000
[    4.224329] ... APIC LVT0: 00010700
[    4.224329] ... APIC LVT1: 00000400
[    4.224329] ... APIC LVTERR: 000000fe
[    4.224329] ... APIC TMICT: 0000a2d2
[    4.224329] ... APIC TMCCT: 00000000
[    4.224329] ... APIC TDCR: 00000003
[    4.224329] 
[    4.241632] number of MP IRQ sources: 20.
[    4.242350] number of IO-APIC #2 registers: 24.
[    4.243145] testing the IO APIC.......................
[    4.244064] IO APIC #2......
[    4.244565] .... register #00: 00000000
[    4.245234] .......    : physical APIC id: 00
[    4.245976] .......    : Delivery Type: 0
[    4.246686] .......    : LTS          : 0
[    4.247398] .... register #01: 00170011
[    4.248068] .......     : max redirection entries: 17
[    4.248936] .......     : PRQ implemented: 0
[    4.249687] .......     : IO APIC version: 11
[    4.250456] .... register #02: 00000000
[    4.251139] .......     : arbitration: 00
[    4.251841] .... IRQ redirection table:
[    4.252600]  NR Dst Mask Trig IRR Pol Stat Dmod Deli Vect:
[    4.253557]  00 00  1    0    0   0   0    0    0    00
[    4.254494]  01 03  0    0    0   0   0    1    1    41
[    4.255445]  02 03  0    0    0   0   0    1    1    51
[    4.256380]  03 01  0    0    0   0   0    1    1    61
[    4.257321]  04 01  1    0    0   0   0    1    1    71
[    4.258269]  05 01  0    0    0   0   0    1    1    81
[    4.259201]  06 01  0    0    0   0   0    1    1    91
[    4.260156]  07 01  0    0    0   0   0    1    1    A1
[    4.261116]  08 01  0    0    0   0   0    1    1    B1
[    4.262064]  09 03  1    1    0   0   0    1    1    C1
[    4.263075]  0a 03  1    1    0   0   0    1    1    D1
[    4.263993]  0b 03  1    1    0   0   0    1    1    E1
[    4.264927]  0c 03  0    0    0   0   0    1    1    22
[    4.265864]  0d 01  0    0    0   0   0    1    1    42
[    4.266798]  0e 01  0    0    0   0   0    1    1    52
[    4.267738]  0f 01  0    0    0   0   0    1    1    62
[    4.268703]  10 00  1    0    0   0   0    0    0    00
[    4.269711]  11 00  1    0    0   0   0    0    0    00
[    4.270679]  12 00  1    0    0   0   0    0    0    00
[    4.271623]  13 00  1    0    0   0   0    0    0    00
[    4.272560]  14 00  1    0    0   0   0    0    0    00
[    4.273498]  15 00  1    0    0   0   0    0    0    00
[    4.274437]  16 00  1    0    0   0   0    0    0    00
[    4.275374]  17 00  1    0    0   0   0    0    0    00
[    4.276302] IRQ to pin mappings:
[    4.276861] IRQ0 -> 0:2
[    4.277369] IRQ1 -> 0:1
[    4.277846] IRQ3 -> 0:3
[    4.278368] IRQ4 -> 0:4
[    4.278840] IRQ5 -> 0:5
[    4.279341] IRQ6 -> 0:6
[    4.279806] IRQ7 -> 0:7
[    4.280307] IRQ8 -> 0:8
[    4.280772] IRQ12 -> 0:12
[    4.281299] IRQ13 -> 0:13
[    4.281794] IRQ14 -> 0:14
[    4.282323] IRQ15 -> 0:15
[    4.282818] IRQ33 -> 0:9
[    4.283330] IRQ34 -> 0:10
[    4.283821] IRQ35 -> 0:11
[    4.284346] .................................... done.
[    4.285272] bus: 'serio': driver_probe_device: matched device serio0 with driver atkbd
[    4.285342] device: 'cpu_dma_latency': device_add
[    4.285428] PM: Adding info for No Bus:cpu_dma_latency
[    4.285464] device: 'network_latency': device_add
[    4.285544] PM: Adding info for No Bus:network_latency
[    4.285575] device: 'network_throughput': device_add
[    4.285639] PM: Adding info for No Bus:network_throughput
[    4.285682] PM: Hibernation image not present or could not be loaded.
[    4.285721] registered taskstats version 1
[    4.285723] Running tests on trace events:
[    4.285725] Testing event kfree_skb: [    4.294208] bus: 'serio': really_probe: probing driver atkbd with device serio0
[    4.297195] device: 'input0': device_add
[    4.298042] PM: Adding info for No Bus:input0
[    4.298925] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
[    4.300637] driver: 'serio0': driver_bound: bound to device 'atkbd'
[    4.300706] Testing event consume_skb: OK
[    4.302376] bus: 'serio': really_probe: bound device serio0 to driver atkbd
[    4.303686] bus: 'serio': driver_probe_device: matched device serio1 with driver atkbd
[    4.305078] bus: 'serio': really_probe: probing driver atkbd with device serio1
[    4.306670] atkbd: probe of serio1 rejects match -19
[    4.308159] OK
[    4.308516] Testing event skb_copy_datagram_iovec: OK
[    4.313332] Testing event net_dev_xmit: OK
[    4.318324] Testing event net_dev_queue: OK
[    4.323321] Testing event netif_receive_skb: OK
[    4.328338] Testing event netif_rx: OK
[    4.333307] Testing event napi_poll: OK
[    4.338312] Testing event sock_rcvqueue_full: OK
[    4.343327] Testing event sock_exceed_buf_limit: OK
[    4.348306] Testing event udp_fail_queue_rcv_skb: OK
[    4.353292] Testing event regmap_reg_write: OK
[    4.358307] Testing event regmap_reg_read: OK
[    4.363288] Testing event regmap_reg_read_cache: OK
[    4.368310] Testing event regmap_hw_read_start: OK
[    4.373288] Testing event regmap_hw_read_done: OK
[    4.378311] Testing event regmap_hw_write_start: OK
[    4.383292] Testing event regmap_hw_write_done: OK
[    4.388300] Testing event regcache_sync: OK
[    4.393289] Testing event regmap_cache_only: OK
[    4.398338] Testing event regmap_cache_bypass: OK
[    4.403288] Testing event mix_pool_bytes: OK
[    4.408307] Testing event mix_pool_bytes_nolock: OK
[    4.413289] Testing event credit_entropy_bits: OK
[    4.418306] Testing event get_random_bytes: OK
[    4.423309] Testing event extract_entropy: OK
[    4.428309] Testing event extract_entropy_user: OK
[    4.433289] Testing event regulator_enable: OK
[    4.438303] Testing event regulator_enable_delay: OK
[    4.443323] Testing event regulator_enable_complete: OK
[    4.448298] Testing event regulator_disable: OK
[    4.453291] Testing event regulator_disable_complete: OK
[    4.458307] Testing event regulator_set_voltage: OK
[    4.463288] Testing event regulator_set_voltage_complete: OK
[    4.468304] Testing event gpio_direction: OK
[    4.473295] Testing event gpio_value: OK
[    4.478304] Testing event block_rq_abort: OK
[    4.483238] Testing event block_rq_requeue: OK
[    4.488339] Testing event block_rq_complete: OK
[    4.493294] Testing event block_rq_insert: OK
[    4.498307] Testing event block_rq_issue: OK
[    4.503303] Testing event block_bio_bounce: OK
[    4.508298] Testing event block_bio_complete: OK
[    4.513292] Testing event block_bio_backmerge: OK
[    4.518301] Testing event block_bio_frontmerge: OK
[    4.523291] Testing event block_bio_queue: OK
[    4.528303] Testing event block_getrq: OK
[    4.533329] Testing event block_sleeprq: OK
[    4.538301] Testing event block_plug: OK
[    4.543289] Testing event block_unplug: OK
[    4.548309] Testing event block_split: OK
[    4.553295] Testing event block_bio_remap: OK
[    4.558301] Testing event block_rq_remap: OK
[    4.563298] Testing event writeback_nothread: OK
[    4.568301] Testing event writeback_queue: OK
[    4.573290] Testing event writeback_exec: OK
[    4.578330] Testing event writeback_start: OK
[    4.583291] Testing event writeback_written: OK
[    4.588303] Testing event writeback_wait: OK
[    4.593292] Testing event writeback_pages_written: OK
[    4.598237] Testing event writeback_nowork: OK
[    4.603288] Testing event writeback_wake_background: OK
[    4.608306] Testing event writeback_wake_thread: OK
[    4.613301] Testing event writeback_wake_forker_thread: OK
[    4.618306] Testing event writeback_bdi_register: OK
[    4.623290] Testing event writeback_bdi_unregister: OK
[    4.628303] Testing event writeback_thread_start: OK
[    4.633329] Testing event writeback_thread_stop: OK
[    4.638304] Testing event wbc_writepage: OK
[    4.643290] Testing event writeback_queue_io: OK
[    4.648306] Testing event global_dirty_state: OK
[    4.653237] Testing event bdi_dirty_ratelimit: OK
[    4.658295] Testing event balance_dirty_pages: OK
[    4.663256] Testing event writeback_sb_inodes_requeue: OK
[    4.668272] Testing event writeback_congestion_wait: OK
[    4.673256] Testing event writeback_wait_iff_congested: OK
[    4.678306] Testing event writeback_single_inode: OK
[    4.683271] Testing event mm_compaction_isolate_migratepages: OK
[    4.688266] Testing event mm_compaction_isolate_freepages: OK
[    4.693258] Testing event mm_compaction_migratepages: OK
[    4.698274] Testing event kmalloc: OK
[    4.703263] Testing event kmem_cache_alloc: OK
[    4.708277] Testing event kmalloc_node: OK
[    4.713254] Testing event kmem_cache_alloc_node: OK
[    4.718264] Testing event kfree: OK
[    4.722270] Testing event kmem_cache_free: OK
[    4.727261] Testing event mm_page_free: OK
[    4.732307] Testing event mm_page_free_batched: OK
[    4.737256] Testing event mm_page_alloc: OK
[    4.742272] Testing event mm_page_alloc_zone_locked: OK
[    4.747257] Testing event mm_page_pcpu_drain: OK
[    4.752254] Testing event mm_page_alloc_extfrag: OK
[    4.757256] Testing event mm_vmscan_kswapd_sleep: OK
[    4.762257] Testing event mm_vmscan_kswapd_wake: OK
[    4.767263] Testing event mm_vmscan_wakeup_kswapd: OK
[    4.772256] Testing event mm_vmscan_direct_reclaim_begin: OK
[    4.777293] Testing event mm_vmscan_memcg_reclaim_begin: OK
[    4.782256] Testing event mm_vmscan_memcg_softlimit_reclaim_begin: OK
[    4.787260] Testing event mm_vmscan_direct_reclaim_end: OK
[    4.792254] Testing event mm_vmscan_memcg_reclaim_end: OK
[    4.797258] Testing event mm_vmscan_memcg_softlimit_reclaim_end: OK
[    4.802258] Testing event mm_shrink_slab_start: OK
[    4.807254] Testing event mm_shrink_slab_end: OK
[    4.812267] Testing event mm_vmscan_lru_isolate: OK
[    4.817256] Testing event mm_vmscan_memcg_isolate: OK
[    4.822294] Testing event mm_vmscan_writepage: OK
[    4.827257] Testing event mm_vmscan_lru_shrink_inactive: OK
[    4.832255] Testing event oom_score_adj_update: OK
[    4.837272] Testing event rpm_suspend: OK
[    4.842266] Testing event rpm_resume: OK
[    4.847254] Testing event rpm_idle: OK
[    4.852275] Testing event rpm_return_int: OK
[    4.857259] Testing event cpu_idle: OK
[    4.862274] Testing event cpu_frequency: OK
[    4.867257] Testing event machine_suspend: OK
[    4.872277] Testing event wakeup_source_activate: OK
[    4.877254] Testing event wakeup_source_deactivate: OK
[    4.882258] Testing event clock_enable: OK
[    4.887259] Testing event clock_disable: OK
[    4.892270] Testing event clock_set_rate: OK
[    4.897258] Testing event power_domain_target: OK
[    4.902257] Testing event ftrace_test_filter: OK
[    4.907300] Testing event module_load: OK
[    4.912275] Testing event module_free: OK
[    4.917497] Testing event module_request: OK
[    4.923568] Testing event lock_acquire: OK
[    4.928486] Testing event lock_release: OK
[    4.933310] Testing event sched_kthread_stop: OK
[    4.938267] Testing event sched_kthread_stop_ret: OK
[    4.943258] Testing event sched_wakeup: OK
[    4.948373] Testing event sched_wakeup_new: OK
[    4.953258] Testing event sched_switch: OK
[    4.958273] Testing event sched_migrate_task: OK
[    4.963253] Testing event sched_process_free: OK
[    4.968266] Testing event sched_process_exit: OK
[    4.973265] Testing event sched_wait_task: OK
[    4.978267] Testing event sched_process_wait: OK
[    4.983255] Testing event sched_process_fork: OK
[    4.988270] Testing event sched_process_exec: OK
[    4.993294] Testing event sched_stat_wait: OK
[    4.998277] Testing event sched_stat_sleep: OK
[    5.003261] Testing event sched_stat_iowait: OK
[    5.008266] Testing event sched_stat_blocked: OK
[    5.013261] Testing event sched_stat_runtime: OK
[    5.018276] Testing event sched_pi_setprio: OK
[    5.023255] Testing event rcu_utilization: OK
[    5.028279] Testing event rcu_grace_period: OK
[    5.033260] Testing event rcu_grace_period_init: OK
[    5.038307] Testing event rcu_preempt_task: OK
[    5.043266] Testing event rcu_unlock_preempted_task: OK
[    5.048267] Testing event rcu_quiescent_state_report: OK
[    5.053265] Testing event rcu_fqs: OK
[    5.058272] Testing event rcu_dyntick: OK
[    5.063270] Testing event rcu_prep_idle: OK
[    5.068278] Testing event rcu_callback: OK
[    5.073260] Testing event rcu_kfree_callback: OK
[    5.078266] Testing event rcu_batch_start: OK
[    5.083292] Testing event rcu_invoke_callback: OK
[    5.088274] Testing event rcu_invoke_kfree_callback: OK
[    5.093259] Testing event rcu_batch_end: OK
[    5.098278] Testing event rcu_torture_read: OK
[    5.103267] Testing event rcu_barrier: OK
[    5.108276] Testing event workqueue_queue_work: OK
[    5.113252] Testing event workqueue_activate_work: OK
[    5.118272] Testing event workqueue_execute_start: OK
[    5.123257] Testing event workqueue_execute_end: OK
[    5.128281] Testing event signal_generate: OK
[    5.133256] Testing event signal_deliver: OK
[    5.138276] Testing event timer_init: OK
[    5.143260] Testing event timer_start: OK
[    5.148265] Testing event timer_expire_entry: OK
[    5.153257] Testing event timer_expire_exit: OK
[    5.158277] Testing event timer_cancel: OK
[    5.163264] Testing event hrtimer_init: OK
[    5.168277] Testing event hrtimer_start: OK
[    5.173309] Testing event hrtimer_expire_entry: OK
[    5.178271] Testing event hrtimer_expire_exit: OK
[    5.183301] Testing event hrtimer_cancel: OK
[    5.188284] Testing event itimer_state: OK
[    5.193257] Testing event itimer_expire: OK
[    5.198282] Testing event irq_handler_entry: OK
[    5.203257] Testing event irq_handler_exit: OK
[    5.208266] Testing event softirq_entry: OK
[    5.213257] Testing event softirq_exit: OK
[    5.218277] Testing event softirq_raise: OK
[    5.223259] Testing event console: OK
[    5.228311] Testing event task_newtask: OK
[    5.233255] Testing event task_rename: OK
[    5.238268] Testing event sys_enter: OK
[    5.243258] Testing event sys_exit: OK
[    5.248272] Testing event emulate_vsyscall: OK
[    5.253276] Running tests on trace event systems:
[    5.254168] Testing event system skb: OK
[    5.259478] Testing event system net: OK
[    5.264351] Testing event system napi: OK
[    5.269296] Testing event system sock: OK
[    5.274296] Testing event system udp: OK
[    5.279472] Testing event system regmap: OK
[    5.284388] Testing event system random: OK
[    5.289346] Testing event system regulator: OK
[    5.294352] Testing event system gpio: OK
[    5.299290] Testing event system block: OK
[    5.304487] Testing event system writeback: OK
[    5.309652] Testing event system compaction: 


* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
@ 2012-07-13  2:08             ` Fengguang Wu
  0 siblings, 0 replies; 96+ messages in thread
From: Fengguang Wu @ 2012-07-13  2:08 UTC (permalink / raw)
  To: Tejun Heo
  Cc: axboe, elder, rni, martin.petersen, linux-bluetooth, torvalds,
	marcel, linux-kernel, vwadekar, swhiteho, herbert, bpm,
	Tony Luck, linux-crypto, gustavo, xfs, joshhunt00, davem, vgoyal,
	johan.hedberg

[-- Attachment #1: Type: text/plain, Size: 3837 bytes --]

On Thu, Jul 12, 2012 at 02:45:14PM -0700, Tejun Heo wrote:
> Hello, again.
> 
> On Thu, Jul 12, 2012 at 10:05:19AM -0700, Tejun Heo wrote:
> > On Thu, Jul 12, 2012 at 09:06:48PM +0800, Fengguang Wu wrote:
> > > [    0.207977] WARNING: at /c/kernel-tests/mm/kernel/workqueue.c:1217 worker_enter_idle+0x2b8/0x32b()
> > > [    0.207977] Modules linked in:
> > > [    0.207977] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-08414-g9645fff #15
> > > [    0.207977] Call Trace:
> > > [    0.207977]  [<ffffffff81087189>] ? worker_enter_idle+0x2b8/0x32b
> > > [    0.207977]  [<ffffffff810559d9>] warn_slowpath_common+0xae/0xdb
> > > [    0.207977]  [<ffffffff81055a2e>] warn_slowpath_null+0x28/0x31
> > > [    0.207977]  [<ffffffff81087189>] worker_enter_idle+0x2b8/0x32b
> > > [    0.207977]  [<ffffffff81087222>] start_worker+0x26/0x42
> > > [    0.207977]  [<ffffffff81c8b261>] init_workqueues+0x2d2/0x59a
> > > [    0.207977]  [<ffffffff81c8af8f>] ? usermodehelper_init+0x8a/0x8a
> > > [    0.207977]  [<ffffffff81000284>] do_one_initcall+0xce/0x272
> > > [    0.207977]  [<ffffffff81c6f650>] kernel_init+0x12e/0x3c1
> > > [    0.207977]  [<ffffffff814b9b74>] kernel_thread_helper+0x4/0x10
> > > [    0.207977]  [<ffffffff814b80b0>] ? retint_restore_args+0x13/0x13
> > > [    0.207977]  [<ffffffff81c6f522>] ? start_kernel+0x737/0x737
> > > [    0.207977]  [<ffffffff814b9b70>] ? gs_change+0x13/0x13
> > 
> > Yeah, I forgot to flip the WARN_ON_ONCE() condition so that it checks
> > nr_running before looking at pool->nr_running.  The warning is
> > spurious.  Will post fix soon.
> 
> I was wrong and am now dazed and confused.  That's from
> init_workqueues() where only cpu0 is running.  How the hell did
> nr_running manage to become non-zero at that point?  Can you please
> apply the following patch and report the boot log?  Thank you.

Tejun, here is the data I got:

[    0.165669] Performance Events: unsupported Netburst CPU model 6 no PMU driver, software events only.
[    0.167001] XXX cpu=0 gcwq=ffff88000dc0cfc0 base=ffff88000dc11e80
[    0.167989] XXX cpu=0 nr_running=0 @ ffff88000dc11e80
[    0.168988] XXX cpu=0 nr_running=0 @ ffff88000dc11e88
[    0.169988] XXX cpu=1 gcwq=ffff88000dd0cfc0 base=ffff88000dd11e80
[    0.170988] XXX cpu=1 nr_running=0 @ ffff88000dd11e80
[    0.171987] XXX cpu=1 nr_running=0 @ ffff88000dd11e88
[    0.172988] XXX cpu=8 nr_running=0 @ ffffffff81d7c430
[    0.173987] XXX cpu=8 nr_running=12 @ ffffffff81d7c438
[    0.175416] ------------[ cut here ]------------
[    0.175981] WARNING: at /c/wfg/linux/kernel/workqueue.c:1220 worker_enter_idle+0x2b8/0x32b()
[    0.175981] Modules linked in:
[    0.175981] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-bisect-next-20120712-dirty #102
[    0.175981] Call Trace:
[    0.175981]  [<ffffffff81087455>] ? worker_enter_idle+0x2b8/0x32b
[    0.175981]  [<ffffffff810559d1>] warn_slowpath_common+0xae/0xdb
[    0.175981]  [<ffffffff81055a26>] warn_slowpath_null+0x28/0x31
[    0.175981]  [<ffffffff81087455>] worker_enter_idle+0x2b8/0x32b
[    0.175981]  [<ffffffff810874ee>] start_worker+0x26/0x42
[    0.175981]  [<ffffffff81c7dc4d>] init_workqueues+0x370/0x638
[    0.175981]  [<ffffffff81c7d8dd>] ? usermodehelper_init+0x8a/0x8a
[    0.175981]  [<ffffffff81000284>] do_one_initcall+0xce/0x272
[    0.175981]  [<ffffffff81c62652>] kernel_init+0x12e/0x3c1
[    0.175981]  [<ffffffff814b6e74>] kernel_thread_helper+0x4/0x10
[    0.175981]  [<ffffffff814b53b0>] ? retint_restore_args+0x13/0x13
[    0.175981]  [<ffffffff81c62524>] ? start_kernel+0x739/0x739
[    0.175981]  [<ffffffff814b6e70>] ? gs_change+0x13/0x13
[    0.175981] ---[ end trace c22d98677c4d3e37 ]---
[    0.178091] Testing tracer nop: PASSED

The attached dmesg is not complete because, once it sees the oops message,
my script kills the kvm to save time.

Thanks,
Fengguang

[-- Attachment #2: dmesg-kvm_bisect-waimea-27649-2012-07-13-08-34-35 --]
[-- Type: text/plain, Size: 93870 bytes --]

[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.5.0-rc6-bisect-next-20120712-dirty (wfg@bee) (gcc version 4.7.0 (Debian 4.7.1-1) ) #102 SMP Fri Jul 13 08:32:30 CST 2012
[    0.000000] Command line: bisect-reboot x86_64-randconfig run_test= trinity=0 auth_hashtable_size=10 sunrpc.auth_hashtable_size=10 log_buf_len=8M ignore_loglevel debug sched_debug apic=debug dynamic_printk sysrq_always_enabled panic=10 hung_task_panic=1 softlockup_panic=1 unknown_nmi_panic=1 nmi_watchdog=panic,lapic  prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal  root=/dev/ram0 rw BOOT_IMAGE=x86_64/vmlinuz-bisect
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   Centaur CentaurHauls
[    0.000000] Disabled fast string operations
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009f3ff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009f400-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000000fffcfff] usable
[    0.000000] BIOS-e820: [mem 0x000000000fffd000-0x000000000fffffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000fffbc000-0x00000000ffffffff] reserved
[    0.000000] debug: ignoring loglevel setting.
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] e820: update [mem 0x00000000-0x0000ffff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0xfffd max_arch_pfn = 0x400000000
[    0.000000] MTRR default type: write-back
[    0.000000] MTRR fixed ranges enabled:
[    0.000000]   00000-9FFFF write-back
[    0.000000]   A0000-BFFFF uncachable
[    0.000000]   C0000-FFFFF write-protect
[    0.000000] MTRR variable ranges enabled:
[    0.000000]   0 base 00E0000000 mask FFE0000000 uncachable
[    0.000000]   1 disabled
[    0.000000]   2 disabled
[    0.000000]   3 disabled
[    0.000000]   4 disabled
[    0.000000]   5 disabled
[    0.000000]   6 disabled
[    0.000000]   7 disabled
[    0.000000] Scan for SMP in [mem 0x00000000-0x000003ff]
[    0.000000] Scan for SMP in [mem 0x0009fc00-0x0009ffff]
[    0.000000] Scan for SMP in [mem 0x000f0000-0x000fffff]
[    0.000000] found SMP MP-table at [mem 0x000f8860-0x000f886f] mapped at [ffff8800000f8860]
[    0.000000]   mpc: f8870-f898c
[    0.000000] initial memory mapped: [mem 0x00000000-0x1fffffff]
[    0.000000] Base memory trampoline at [ffff880000099000] 99000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x0fffcfff]
[    0.000000]  [mem 0x00000000-0x0fffcfff] page 4k
[    0.000000] kernel direct mapping tables up to 0xfffcfff @ [mem 0x0e854000-0x0e8d5fff]
[    0.000000] log_buf_len: 8388608
[    0.000000] early log buf free: 128176(97%)
[    0.000000] RAMDISK: [mem 0x0e8d6000-0x0ffeffff]
[    0.000000] No NUMA configuration found
[    0.000000] Faking a node at [mem 0x0000000000000000-0x000000000fffcfff]
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x0fffcfff]
[    0.000000]   NODE_DATA [mem 0x0fff8000-0x0fffcfff]
[    0.000000] kvm-clock: Using msrs 12 and 11
[    0.000000] kvm-clock: cpu 0, msr 0:1c5fe01, boot clock
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00010000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00010000-0x0009efff]
[    0.000000]   node   0: [mem 0x00100000-0x0fffcfff]
[    0.000000] On node 0 totalpages: 65420
[    0.000000]   DMA zone: 64 pages used for memmap
[    0.000000]   DMA zone: 6 pages reserved
[    0.000000]   DMA zone: 3913 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 960 pages used for memmap
[    0.000000]   DMA32 zone: 60477 pages, LIFO batch:15
[    0.000000] Intel MultiProcessor Specification v1.4
[    0.000000]   mpc: f8870-f898c
[    0.000000] MPTABLE: OEM ID: BOCHSCPU
[    0.000000] MPTABLE: Product ID: 0.1         
[    0.000000] MPTABLE: APIC at: 0xFEE00000
[    0.000000] mapped APIC to ffffffffff5fb000 (        fee00000)
[    0.000000] Processor #0 (Bootup-CPU)
[    0.000000] Processor #1
[    0.000000] Bus #0 is PCI   
[    0.000000] Bus #1 is ISA   
[    0.000000] IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 04, APIC ID 2, APIC INT 09
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 0c, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 10, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 14, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 18, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 1c, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 20, APIC ID 2, APIC INT 0b
[    0.000000] Int: type 0, pol 1, trig 0, bus 00, IRQ 24, APIC ID 2, APIC INT 0a
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 00, APIC ID 2, APIC INT 02
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 01, APIC ID 2, APIC INT 01
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 03, APIC ID 2, APIC INT 03
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 04, APIC ID 2, APIC INT 04
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 05, APIC ID 2, APIC INT 05
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 06, APIC ID 2, APIC INT 06
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 07, APIC ID 2, APIC INT 07
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 08, APIC ID 2, APIC INT 08
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0c, APIC ID 2, APIC INT 0c
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0d, APIC ID 2, APIC INT 0d
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0e, APIC ID 2, APIC INT 0e
[    0.000000] Int: type 0, pol 0, trig 0, bus 01, IRQ 0f, APIC ID 2, APIC INT 0f
[    0.000000] Lint: type 3, pol 0, trig 0, bus 01, IRQ 00, APIC ID 0, APIC LINT 00
[    0.000000] Lint: type 1, pol 0, trig 0, bus 01, IRQ 00, APIC ID 0, APIC LINT 01
[    0.000000] Processors: 2
[    0.000000] smpboot: Allowing 2 CPUs, 0 hotplug CPUs
[    0.000000] mapped IOAPIC to ffffffffff5fa000 (fec00000)
[    0.000000] nr_irqs_gsi: 40
[    0.000000] PM: Registered nosave memory: 000000000009f000 - 00000000000a0000
[    0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000
[    0.000000] PM: Registered nosave memory: 00000000000f0000 - 0000000000100000
[    0.000000] e820: [mem 0x10000000-0xfffbbfff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on KVM
[    0.000000] setup_percpu: NR_CPUS:8 nr_cpumask_bits:8 nr_cpu_ids:2 nr_node_ids:1
[    0.000000] PERCPU: Embedded 26 pages/cpu @ffff88000dc00000 s76800 r8192 d21504 u1048576
[    0.000000] pcpu-alloc: s76800 r8192 d21504 u1048576 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 
[    0.000000] kvm-clock: cpu 0, msr 0:dc11e01, primary cpu clock
[    0.000000] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 64390
[    0.000000] Policy zone: DMA32
[    0.000000] Kernel command line: bisect-reboot x86_64-randconfig run_test= trinity=0 auth_hashtable_size=10 sunrpc.auth_hashtable_size=10 log_buf_len=8M ignore_loglevel debug sched_debug apic=debug dynamic_printk sysrq_always_enabled panic=10 hung_task_panic=1 softlockup_panic=1 unknown_nmi_panic=1 nmi_watchdog=panic,lapic  prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal  root=/dev/ram0 rw BOOT_IMAGE=x86_64/vmlinuz-bisect
[    0.000000] PID hash table entries: 1024 (order: 1, 8192 bytes)
[    0.000000] __ex_table already sorted, skipping sort
[    0.000000] Memory: 200000k/262132k available (4835k kernel code, 452k absent, 61680k reserved, 7751k data, 568k init)
[    0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
[    0.000000] Hierarchical RCU implementation.
[    0.000000] 	RCU debugfs-based tracing is enabled.
[    0.000000] 	RCU restricting CPUs from NR_CPUS=8 to nr_cpu_ids=2.
[    0.000000] NR_IRQS:4352 nr_irqs:56 16
[    0.000000] console [ttyS0] enabled
[    0.000000] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar
[    0.000000] ... MAX_LOCKDEP_SUBCLASSES:  8
[    0.000000] ... MAX_LOCK_DEPTH:          48
[    0.000000] ... MAX_LOCKDEP_KEYS:        8191
[    0.000000] ... CLASSHASH_SIZE:          4096
[    0.000000] ... MAX_LOCKDEP_ENTRIES:     16384
[    0.000000] ... MAX_LOCKDEP_CHAINS:      32768
[    0.000000] ... CHAINHASH_SIZE:          16384
[    0.000000]  memory used by lock dependency info: 5855 kB
[    0.000000]  per task-struct memory footprint: 1920 bytes
[    0.000000] ------------------------
[    0.000000] | Locking API testsuite:
[    0.000000] ----------------------------------------------------------------------------
[    0.000000]                                  | spin |wlock |rlock |mutex | wsem | rsem |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]                      A-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                  A-B-B-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]              A-B-B-C-C-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]              A-B-C-A-B-C deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-B-C-C-D-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-C-D-B-D-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]          A-B-C-D-B-C-D-A deadlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                     double unlock:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                   initialize held:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]                  bad unlock order:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]               recursive read-lock:             |  ok  |             |  ok  |
[    0.000000]            recursive read-lock #2:             |  ok  |             |  ok  |
[    0.000000]             mixed read-write-lock:             |  ok  |             |  ok  |
[    0.000000]             mixed write-read-lock:             |  ok  |             |  ok  |
[    0.000000]   --------------------------------------------------------------------------
[    0.000000]      hard-irqs-on + irq-safe-A/12:  ok  |  ok  |  ok  |
[    0.000000]      soft-irqs-on + irq-safe-A/12:  ok  |  ok  |  ok  |
[    0.000000]      hard-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
[    0.000000]      soft-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
[    0.000000]        sirq-safe-A => hirqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]        sirq-safe-A => hirqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]          hard-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]          soft-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
[    0.000000]          hard-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]          soft-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/132:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/132:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/213:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/213:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/231:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/231:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/312:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/312:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #1/321:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #1/321:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/123:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/123:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/132:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/132:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/213:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/213:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/231:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/231:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/312:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/312:  ok  |  ok  |  ok  |
[    0.000000]     hard-safe-A + unsafe-B #2/321:  ok  |  ok  |  ok  |
[    0.000000]     soft-safe-A + unsafe-B #2/321:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/123:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/123:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/132:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/132:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/213:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/213:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/231:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/231:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/312:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/312:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq lock-inversion/321:  ok  |  ok  |  ok  |
[    0.000000]       soft-irq lock-inversion/321:  ok  |  ok  |  ok  |
[    0.000000]       hard-irq read-recursion/123:  ok  |
[    0.000000]       soft-irq read-recursion/123:  ok  |
[    0.000000]       hard-irq read-recursion/132:  ok  |
[    0.000000]       soft-irq read-recursion/132:  ok  |
[    0.000000]       hard-irq read-recursion/213:  ok  |
[    0.000000]       soft-irq read-recursion/213:  ok  |
[    0.000000]       hard-irq read-recursion/231:  ok  |
[    0.000000]       soft-irq read-recursion/231:  ok  |
[    0.000000]       hard-irq read-recursion/312:  ok  |
[    0.000000]       soft-irq read-recursion/312:  ok  |
[    0.000000]       hard-irq read-recursion/321:  ok  |
[    0.000000]       soft-irq read-recursion/321:  ok  |
[    0.000000] -------------------------------------------------------
[    0.000000] Good, all 218 testcases passed! |
[    0.000000] ---------------------------------
[    0.000000] tsc: Detected 3299.986 MHz processor
[    0.000999] Calibrating delay loop (skipped) preset value.. 6599.97 BogoMIPS (lpj=3299986)
[    0.002008] pid_max: default: 32768 minimum: 301
[    0.003176] Security Framework initialized
[    0.004304] Dentry cache hash table entries: 32768 (order: 6, 262144 bytes)
[    0.006232] Inode-cache hash table entries: 16384 (order: 5, 131072 bytes)
[    0.007245] Mount-cache hash table entries: 256
[    0.010107] Initializing cgroup subsys debug
[    0.010876] Initializing cgroup subsys freezer
[    0.011009] Initializing cgroup subsys perf_event
[    0.012104] Disabled fast string operations
[    0.014242] ftrace: allocating 10983 entries in 43 pages
[    0.020312] Getting VERSION: 50014
[    0.021011] Getting VERSION: 50014
[    0.021605] Getting ID: 0
[    0.022010] Getting ID: ff000000
[    0.022583] Getting LVT0: 8700
[    0.023008] Getting LVT1: 8400
[    0.023589] enabled ExtINT on CPU#0
[    0.025253] ENABLING IO-APIC IRQs
[    0.025839] init IO_APIC IRQs
[    0.026007]  apic 2 pin 0 not connected
[    0.027032] IOAPIC[0]: Set routing entry (2-1 -> 0x41 -> IRQ 1 Mode:0 Active:0 Dest:1)
[    0.028026] IOAPIC[0]: Set routing entry (2-2 -> 0x51 -> IRQ 0 Mode:0 Active:0 Dest:1)
[    0.029033] IOAPIC[0]: Set routing entry (2-3 -> 0x61 -> IRQ 3 Mode:0 Active:0 Dest:1)
[    0.030043] IOAPIC[0]: Set routing entry (2-4 -> 0x71 -> IRQ 4 Mode:0 Active:0 Dest:1)
[    0.031022] IOAPIC[0]: Set routing entry (2-5 -> 0x81 -> IRQ 5 Mode:0 Active:0 Dest:1)
[    0.033031] IOAPIC[0]: Set routing entry (2-6 -> 0x91 -> IRQ 6 Mode:0 Active:0 Dest:1)
[    0.034022] IOAPIC[0]: Set routing entry (2-7 -> 0xa1 -> IRQ 7 Mode:0 Active:0 Dest:1)
[    0.036021] IOAPIC[0]: Set routing entry (2-8 -> 0xb1 -> IRQ 8 Mode:0 Active:0 Dest:1)
[    0.037028] IOAPIC[0]: Set routing entry (2-9 -> 0xc1 -> IRQ 33 Mode:1 Active:0 Dest:1)
[    0.038025] IOAPIC[0]: Set routing entry (2-10 -> 0xd1 -> IRQ 34 Mode:1 Active:0 Dest:1)
[    0.040023] IOAPIC[0]: Set routing entry (2-11 -> 0xe1 -> IRQ 35 Mode:1 Active:0 Dest:1)
[    0.041019] IOAPIC[0]: Set routing entry (2-12 -> 0x22 -> IRQ 12 Mode:0 Active:0 Dest:1)
[    0.043020] IOAPIC[0]: Set routing entry (2-13 -> 0x42 -> IRQ 13 Mode:0 Active:0 Dest:1)
[    0.044021] IOAPIC[0]: Set routing entry (2-14 -> 0x52 -> IRQ 14 Mode:0 Active:0 Dest:1)
[    0.046005] IOAPIC[0]: Set routing entry (2-15 -> 0x62 -> IRQ 15 Mode:0 Active:0 Dest:1)
[    0.047016]  apic 2 pin 16 not connected
[    0.048002]  apic 2 pin 17 not connected
[    0.048693]  apic 2 pin 18 not connected
[    0.049001]  apic 2 pin 19 not connected
[    0.050001]  apic 2 pin 20 not connected
[    0.050681]  apic 2 pin 21 not connected
[    0.051001]  apic 2 pin 22 not connected
[    0.052001]  apic 2 pin 23 not connected
[    0.052857] ..TIMER: vector=0x51 apic1=0 pin1=2 apic2=-1 pin2=-1
[    0.054000] smpboot: CPU0: Intel Common KVM processor stepping 01
[    0.056001] Using local APIC timer interrupts.
[    0.056001] calibrating APIC timer ...
[    0.057995] ... lapic delta = 6248865
[    0.057995] ..... delta 6248865
[    0.057995] ..... mult: 268427509
[    0.057995] ..... calibration result: 999818
[    0.057995] ..... CPU clock speed is 3299.0401 MHz.
[    0.057995] ..... host bus clock speed is 999.0818 MHz.
[    0.057995] ... verify APIC timer
[    0.164423] ... jiffies delta = 100
[    0.164989] ... jiffies result ok
[    0.165669] Performance Events: unsupported Netburst CPU model 6 no PMU driver, software events only.
[    0.167001] XXX cpu=0 gcwq=ffff88000dc0cfc0 base=ffff88000dc11e80
[    0.167989] XXX cpu=0 nr_running=0 @ ffff88000dc11e80
[    0.168988] XXX cpu=0 nr_running=0 @ ffff88000dc11e88
[    0.169988] XXX cpu=1 gcwq=ffff88000dd0cfc0 base=ffff88000dd11e80
[    0.170988] XXX cpu=1 nr_running=0 @ ffff88000dd11e80
[    0.171987] XXX cpu=1 nr_running=0 @ ffff88000dd11e88
[    0.172988] XXX cpu=8 nr_running=0 @ ffffffff81d7c430
[    0.173987] XXX cpu=8 nr_running=12 @ ffffffff81d7c438
[    0.175416] ------------[ cut here ]------------
[    0.175981] WARNING: at /c/wfg/linux/kernel/workqueue.c:1220 worker_enter_idle+0x2b8/0x32b()
[    0.175981] Modules linked in:
[    0.175981] Pid: 1, comm: swapper/0 Not tainted 3.5.0-rc6-bisect-next-20120712-dirty #102
[    0.175981] Call Trace:
[    0.175981]  [<ffffffff81087455>] ? worker_enter_idle+0x2b8/0x32b
[    0.175981]  [<ffffffff810559d1>] warn_slowpath_common+0xae/0xdb
[    0.175981]  [<ffffffff81055a26>] warn_slowpath_null+0x28/0x31
[    0.175981]  [<ffffffff81087455>] worker_enter_idle+0x2b8/0x32b
[    0.175981]  [<ffffffff810874ee>] start_worker+0x26/0x42
[    0.175981]  [<ffffffff81c7dc4d>] init_workqueues+0x370/0x638
[    0.175981]  [<ffffffff81c7d8dd>] ? usermodehelper_init+0x8a/0x8a
[    0.175981]  [<ffffffff81000284>] do_one_initcall+0xce/0x272
[    0.175981]  [<ffffffff81c62652>] kernel_init+0x12e/0x3c1
[    0.175981]  [<ffffffff814b6e74>] kernel_thread_helper+0x4/0x10
[    0.175981]  [<ffffffff814b53b0>] ? retint_restore_args+0x13/0x13
[    0.175981]  [<ffffffff81c62524>] ? start_kernel+0x739/0x739
[    0.175981]  [<ffffffff814b6e70>] ? gs_change+0x13/0x13
[    0.175981] ---[ end trace c22d98677c4d3e37 ]---
[    0.178091] Testing tracer nop: PASSED
[    0.179138] NMI watchdog: disabled (cpu0): hardware events not enabled
[    0.181221] SMP alternatives: lockdep: fixing up alternatives
[    0.181995] smpboot: Booting Node   0, Processors  #1 OK
[    0.000999] kvm-clock: cpu 1, msr 0:dd11e01, secondary cpu clock
[    0.000999] masked ExtINT on CPU#1
[    0.000999] Disabled fast string operations
[    0.207203] Brought up 2 CPUs
[    0.207732] smpboot: Total of 2 processors activated (13199.94 BogoMIPS)
[    0.209280] CPU0 attaching sched-domain:
[    0.210007]  domain 0: span 0-1 level CPU
[    0.210710]   groups: 0 (cpu_power = 1023) 1
[    0.211440] CPU1 attaching sched-domain:
[    0.211983]  domain 0: span 0-1 level CPU
[    0.212694]   groups: 1 0 (cpu_power = 1023)
[    0.218232] devtmpfs: initialized
[    0.218877] device: 'platform': device_add
[    0.219027] PM: Adding info for No Bus:platform
[    0.220063] bus: 'platform': registered
[    0.221055] bus: 'cpu': registered
[    0.221683] device: 'cpu': device_add
[    0.222014] PM: Adding info for No Bus:cpu
[    0.223020] bus: 'memory': registered
[    0.223985] device: 'memory': device_add
[    0.224670] PM: Adding info for No Bus:memory
[    0.230912] device: 'memory0': device_add
[    0.231006] bus: 'memory': add device memory0
[    0.232066] PM: Adding info for memory:memory0
[    0.233071] device: 'memory1': device_add
[    0.233986] bus: 'memory': add device memory1
[    0.234765] PM: Adding info for memory:memory1
[    0.248722] atomic64 test passed for x86-64 platform with CX8 and with SSE
[    0.249977] device class 'regulator': registering
[    0.251020] Registering platform device 'reg-dummy'. Parent at platform
[    0.251991] device: 'reg-dummy': device_add
[    0.252985] bus: 'platform': add device reg-dummy
[    0.253848] PM: Adding info for platform:reg-dummy
[    0.260849] bus: 'platform': add driver reg-dummy
[    0.260984] bus: 'platform': driver_probe_device: matched device reg-dummy with driver reg-dummy
[    0.262977] bus: 'platform': really_probe: probing driver reg-dummy with device reg-dummy
[    0.264070] device: 'regulator.0': device_add
[    0.265133] PM: Adding info for No Bus:regulator.0
[    0.266085] dummy: 
[    0.273208] driver: 'reg-dummy': driver_bound: bound to device 'reg-dummy'
[    0.274005] bus: 'platform': really_probe: bound device reg-dummy to driver reg-dummy
[    0.275092] RTC time:  0:34:29, date: 07/13/12
[    0.276994] NET: Registered protocol family 16
[    0.277905] device class 'bdi': registering
[    0.278011] device class 'tty': registering
[    0.279013] bus: 'node': registered
[    0.286795] device: 'node': device_add
[    0.287020] PM: Adding info for No Bus:node
[    0.288127] device class 'dma': registering
[    0.289071] device: 'node0': device_add
[    0.289747] bus: 'node': add device node0
[    0.289994] PM: Adding info for node:node0
[    0.291031] device: 'cpu0': device_add
[    0.291977] bus: 'cpu': add device cpu0
[    0.292677] PM: Adding info for cpu:cpu0
[    0.299186] device: 'cpu1': device_add
[    0.299860] bus: 'cpu': add device cpu1
[    0.299992] PM: Adding info for cpu:cpu1
[    0.301007] mtrr: your CPUs had inconsistent variable MTRR settings
[    0.301969] mtrr: your CPUs had inconsistent MTRRdefType settings
[    0.302968] mtrr: probably your BIOS does not setup all CPUs.
[    0.303968] mtrr: corrected configuration.
[    0.311821] device: 'default': device_add
[    0.312027] PM: Adding info for No Bus:default
[    0.314526] bio: create slab <bio-0> at 0
[    0.315020] device class 'block': registering
[    0.317769] device class 'misc': registering
[    0.318022] bus: 'serio': registered
[    0.318967] device class 'input': registering
[    0.320006] device class 'power_supply': registering
[    0.320994] device class 'leds': registering
[    0.321795] device class 'net': registering
[    0.322030] device: 'lo': device_add
[    0.323147] PM: Adding info for No Bus:lo
[    0.330653] Switching to clocksource kvm-clock
[    0.332373] Warning: could not register all branches stats
[    0.333365] Warning: could not register annotated branches stats
[    0.413675] device class 'mem': registering
[    0.414493] device: 'mem': device_add
[    0.420754] PM: Adding info for No Bus:mem
[    0.421550] device: 'kmem': device_add
[    0.423861] PM: Adding info for No Bus:kmem
[    0.424642] device: 'null': device_add
[    0.426918] PM: Adding info for No Bus:null
[    0.427694] device: 'zero': device_add
[    0.430025] PM: Adding info for No Bus:zero
[    0.430773] device: 'full': device_add
[    0.433074] PM: Adding info for No Bus:full
[    0.433838] device: 'random': device_add
[    0.436151] PM: Adding info for No Bus:random
[    0.436919] device: 'urandom': device_add
[    0.439276] PM: Adding info for No Bus:urandom
[    0.440100] device: 'kmsg': device_add
[    0.442396] PM: Adding info for No Bus:kmsg
[    0.443148] device: 'tty': device_add
[    0.445317] PM: Adding info for No Bus:tty
[    0.446087] device: 'console': device_add
[    0.448386] PM: Adding info for No Bus:console
[    0.449224] NET: Registered protocol family 1
[    0.450284] Unpacking initramfs...
[    1.877893] debug: unmapping init [mem 0xffff88000e8d6000-0xffff88000ffeffff]
[    1.903095] DMA-API: preallocated 32768 debug entries
[    1.903966] DMA-API: debugging enabled by kernel config
[    1.905059] Registering platform device 'rtc_cmos'. Parent at platform
[    1.906178] device: 'rtc_cmos': device_add
[    1.906884] bus: 'platform': add device rtc_cmos
[    1.907727] PM: Adding info for platform:rtc_cmos
[    1.908579] platform rtc_cmos: registered platform RTC device (no PNP device found)
[    1.910170] device: 'snapshot': device_add
[    1.911083] PM: Adding info for No Bus:snapshot
[    1.911949] bus: 'clocksource': registered
[    1.912686] device: 'clocksource': device_add
[    1.913480] PM: Adding info for No Bus:clocksource
[    1.914328] device: 'clocksource0': device_add
[    1.915092] bus: 'clocksource': add device clocksource0
[    1.915985] PM: Adding info for clocksource:clocksource0
[    1.916938] bus: 'platform': add driver alarmtimer
[    1.917799] Registering platform device 'alarmtimer'. Parent at platform
[    1.918948] device: 'alarmtimer': device_add
[    1.919693] bus: 'platform': add device alarmtimer
[    1.920546] PM: Adding info for platform:alarmtimer
[    1.921413] bus: 'platform': driver_probe_device: matched device alarmtimer with driver alarmtimer
[    1.922931] bus: 'platform': really_probe: probing driver alarmtimer with device alarmtimer
[    1.924342] driver: 'alarmtimer': driver_bound: bound to device 'alarmtimer'
[    1.925525] bus: 'platform': really_probe: bound device alarmtimer to driver alarmtimer
[    1.926945] audit: initializing netlink socket (disabled)
[    1.927924] type=2000 audit(1342139670.926:1): initialized
[    1.941097] Testing tracer function: PASSED
[    2.087999] Testing dynamic ftrace: PASSED
[    2.338209] Testing dynamic ftrace ops #1: (1 0 1 1 0) (1 1 2 1 0) (2 1 3 1 940) (2 2 4 1 1027) PASSED
[    2.431997] Testing dynamic ftrace ops #2: (1 0 1 28 0) (1 1 2 297 0) (2 1 3 1 13) (2 2 4 84 96) PASSED
[    2.540363] bus: 'event_source': registered
[    2.541114] device: 'breakpoint': device_add
[    2.541860] bus: 'event_source': add device breakpoint
[    2.542799] PM: Adding info for event_source:breakpoint
[    2.543767] device: 'tracepoint': device_add
[    2.544535] bus: 'event_source': add device tracepoint
[    2.545493] PM: Adding info for event_source:tracepoint
[    2.546442] device: 'software': device_add
[    2.547170] bus: 'event_source': add device software
[    2.548449] PM: Adding info for event_source:software
[    2.549665] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[    2.560548] msgmni has been set to 390
[    2.561843] cryptomgr_test (26) used greatest stack depth: 5736 bytes left
[    2.563190] alg: No test for stdrng (krng)
[    2.564112] device class 'bsg': registering
[    2.564859] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 254)
[    2.566155] io scheduler noop registered (default)
[    2.567035] device: 'ptyp0': device_add
[    2.567860] PM: Adding info for No Bus:ptyp0
[    2.568687] device: 'ptyp1': device_add
[    2.569492] PM: Adding info for No Bus:ptyp1
[    2.570277] device: 'ptyp2': device_add
[    2.571095] PM: Adding info for No Bus:ptyp2
[    2.571873] device: 'ptyp3': device_add
[    2.572693] PM: Adding info for No Bus:ptyp3
[    2.573479] device: 'ptyp4': device_add
[    2.574329] PM: Adding info for No Bus:ptyp4
[    2.575100] device: 'ptyp5': device_add
[    2.575944] PM: Adding info for No Bus:ptyp5
[    2.576723] device: 'ptyp6': device_add
[    2.577525] PM: Adding info for No Bus:ptyp6
[    2.578310] device: 'ptyp7': device_add
[    2.579113] PM: Adding info for No Bus:ptyp7
[    2.579872] device: 'ptyp8': device_add
[    2.580685] PM: Adding info for No Bus:ptyp8
[    2.581469] device: 'ptyp9': device_add
[    2.582286] PM: Adding info for No Bus:ptyp9
[    2.583057] device: 'ptypa': device_add
[    2.583831] PM: Adding info for No Bus:ptypa
[    2.584604] device: 'ptypb': device_add
[    2.585418] PM: Adding info for No Bus:ptypb
[    2.586179] device: 'ptypc': device_add
[    2.586966] PM: Adding info for No Bus:ptypc
[    2.587739] device: 'ptypd': device_add
[    2.588551] PM: Adding info for No Bus:ptypd
[    2.589341] device: 'ptype': device_add
[    2.590252] PM: Adding info for No Bus:ptype
[    2.590998] device: 'ptypf': device_add
[    2.591795] PM: Adding info for No Bus:ptypf
[    2.592572] device: 'ptyq0': device_add
[    2.593400] PM: Adding info for No Bus:ptyq0
[    2.594164] device: 'ptyq1': device_add
[    2.594938] PM: Adding info for No Bus:ptyq1
[    2.595710] device: 'ptyq2': device_add
[    2.596515] PM: Adding info for No Bus:ptyq2
[    2.597330] device: 'ptyq3': device_add
[    2.598162] PM: Adding info for No Bus:ptyq3
[    2.598944] device: 'ptyq4': device_add
[    2.599757] PM: Adding info for No Bus:ptyq4
[    2.600545] device: 'ptyq5': device_add
[    2.601363] PM: Adding info for No Bus:ptyq5
[    2.602125] device: 'ptyq6': device_add
[    2.602949] PM: Adding info for No Bus:ptyq6
[    2.603723] device: 'ptyq7': device_add
[    2.604536] PM: Adding info for No Bus:ptyq7
[    2.605313] device: 'ptyq8': device_add
[    2.606115] PM: Adding info for No Bus:ptyq8
[    2.606877] device: 'ptyq9': device_add
[    2.607708] PM: Adding info for No Bus:ptyq9
[    2.608497] device: 'ptyqa': device_add
[    2.609318] PM: Adding info for No Bus:ptyqa
[    2.610081] device: 'ptyqb': device_add
[    2.610861] PM: Adding info for No Bus:ptyqb
[    2.611631] device: 'ptyqc': device_add
[    2.612444] PM: Adding info for No Bus:ptyqc
[    2.613210] device: 'ptyqd': device_add
[    2.613981] PM: Adding info for No Bus:ptyqd
[    2.614764] device: 'ptyqe': device_add
[    2.615591] PM: Adding info for No Bus:ptyqe
[    2.616375] device: 'ptyqf': device_add
[    2.617165] PM: Adding info for No Bus:ptyqf
[    2.617915] device: 'ptyr0': device_add
[    2.618743] PM: Adding info for No Bus:ptyr0
[    2.619519] device: 'ptyr1': device_add
[    2.620410] PM: Adding info for No Bus:ptyr1
[    2.621175] device: 'ptyr2': device_add
[    2.621952] PM: Adding info for No Bus:ptyr2
[    2.622761] device: 'ptyr3': device_add
[    2.623590] PM: Adding info for No Bus:ptyr3
[    2.624382] device: 'ptyr4': device_add
[    2.625194] PM: Adding info for No Bus:ptyr4
[    2.625964] device: 'ptyr5': device_add
[    2.626783] PM: Adding info for No Bus:ptyr5
[    2.627559] device: 'ptyr6': device_add
[    2.628369] PM: Adding info for No Bus:ptyr6
[    2.629133] device: 'ptyr7': device_add
[    2.629969] PM: Adding info for No Bus:ptyr7
[    2.630741] device: 'ptyr8': device_add
[    2.631555] PM: Adding info for No Bus:ptyr8
[    2.632348] device: 'ptyr9': device_add
[    2.633151] PM: Adding info for No Bus:ptyr9
[    2.633911] device: 'ptyra': device_add
[    2.634730] PM: Adding info for No Bus:ptyra
[    2.635504] device: 'ptyrb': device_add
[    2.636326] PM: Adding info for No Bus:ptyrb
[    2.637086] device: 'ptyrc': device_add
[    2.637887] PM: Adding info for No Bus:ptyrc
[    2.638679] device: 'ptyrd': device_add
[    2.639477] PM: Adding info for No Bus:ptyrd
[    2.640237] device: 'ptyre': device_add
[    2.641060] PM: Adding info for No Bus:ptyre
[    2.641829] device: 'ptyrf': device_add
[    2.642658] PM: Adding info for No Bus:ptyrf
[    2.643433] device: 'ptys0': device_add
[    2.644216] PM: Adding info for No Bus:ptys0
[    2.644963] device: 'ptys1': device_add
[    2.645775] PM: Adding info for No Bus:ptys1
[    2.646550] device: 'ptys2': device_add
[    2.647346] PM: Adding info for No Bus:ptys2
[    2.648134] device: 'ptys3': device_add
[    2.648943] PM: Adding info for No Bus:ptys3
[    2.649735] device: 'ptys4': device_add
[    2.650649] PM: Adding info for No Bus:ptys4
[    2.651445] device: 'ptys5': device_add
[    2.652265] PM: Adding info for No Bus:ptys5
[    2.653031] device: 'ptys6': device_add
[    2.653830] PM: Adding info for No Bus:ptys6
[    2.654604] device: 'ptys7': device_add
[    2.655402] PM: Adding info for No Bus:ptys7
[    2.656162] device: 'ptys8': device_add
[    2.656994] PM: Adding info for No Bus:ptys8
[    2.657777] device: 'ptys9': device_add
[    2.658606] PM: Adding info for No Bus:ptys9
[    2.659397] device: 'ptysa': device_add
[    2.660209] PM: Adding info for No Bus:ptysa
[    2.660961] device: 'ptysb': device_add
[    2.661761] PM: Adding info for No Bus:ptysb
[    2.662534] device: 'ptysc': device_add
[    2.663346] PM: Adding info for No Bus:ptysc
[    2.664106] device: 'ptysd': device_add
[    2.664899] PM: Adding info for No Bus:ptysd
[    2.665672] device: 'ptyse': device_add
[    2.666472] PM: Adding info for No Bus:ptyse
[    2.667259] device: 'ptysf': device_add
[    2.668082] PM: Adding info for No Bus:ptysf
[    2.668851] device: 'ptyt0': device_add
[    2.669657] PM: Adding info for No Bus:ptyt0
[    2.670428] device: 'ptyt1': device_add
[    2.671233] PM: Adding info for No Bus:ptyt1
[    2.671982] device: 'ptyt2': device_add
[    2.672780] PM: Adding info for No Bus:ptyt2
[    2.673586] device: 'ptyt3': device_add
[    2.674402] PM: Adding info for No Bus:ptyt3
[    2.675173] device: 'ptyt4': device_add
[    2.675995] PM: Adding info for No Bus:ptyt4
[    2.676805] device: 'ptyt5': device_add
[    2.677725] PM: Adding info for No Bus:ptyt5
[    2.678507] device: 'ptyt6': device_add
[    2.679343] PM: Adding info for No Bus:ptyt6
[    2.680165] device: 'ptyt7': device_add
[    2.680943] PM: Adding info for No Bus:ptyt7
[    2.681712] device: 'ptyt8': device_add
[    2.682524] PM: Adding info for No Bus:ptyt8
[    2.683294] device: 'ptyt9': device_add
[    2.684136] PM: Adding info for No Bus:ptyt9
[    2.684897] device: 'ptyta': device_add
[    2.685729] PM: Adding info for No Bus:ptyta
[    2.686503] device: 'ptytb': device_add
[    2.687320] PM: Adding info for No Bus:ptytb
[    2.688083] device: 'ptytc': device_add
[    2.688876] PM: Adding info for No Bus:ptytc
[    2.689644] device: 'ptytd': device_add
[    2.690456] PM: Adding info for No Bus:ptytd
[    2.691218] device: 'ptyte': device_add
[    2.691995] PM: Adding info for No Bus:ptyte
[    2.692781] device: 'ptytf': device_add
[    2.693607] PM: Adding info for No Bus:ptytf
[    2.694392] device: 'ptyu0': device_add
[    2.695182] PM: Adding info for No Bus:ptyu0
[    2.695933] device: 'ptyu1': device_add
[    2.696746] PM: Adding info for No Bus:ptyu1
[    2.697516] device: 'ptyu2': device_add
[    2.698371] PM: Adding info for No Bus:ptyu2
[    2.699166] device: 'ptyu3': device_add
[    2.699967] PM: Adding info for No Bus:ptyu3
[    2.700743] device: 'ptyu4': device_add
[    2.701587] PM: Adding info for No Bus:ptyu4
[    2.702392] device: 'ptyu5': device_add
[    2.703192] PM: Adding info for No Bus:ptyu5
[    2.703944] device: 'ptyu6': device_add
[    2.704762] PM: Adding info for No Bus:ptyu6
[    2.705538] device: 'ptyu7': device_add
[    2.706334] PM: Adding info for No Bus:ptyu7
[    2.707093] device: 'ptyu8': device_add
[    2.707894] PM: Adding info for No Bus:ptyu8
[    2.708686] device: 'ptyu9': device_add
[    2.709503] PM: Adding info for No Bus:ptyu9
[    2.710368] device: 'ptyua': device_add
[    2.711209] PM: Adding info for No Bus:ptyua
[    2.711966] device: 'ptyub': device_add
[    2.712873] PM: Adding info for No Bus:ptyub
[    2.713652] device: 'ptyuc': device_add
[    2.714448] PM: Adding info for No Bus:ptyuc
[    2.715210] device: 'ptyud': device_add
[    2.716008] PM: Adding info for No Bus:ptyud
[    2.716780] device: 'ptyue': device_add
[    2.717578] PM: Adding info for No Bus:ptyue
[    2.718367] device: 'ptyuf': device_add
[    2.719187] PM: Adding info for No Bus:ptyuf
[    2.719954] device: 'ptyv0': device_add
[    2.720776] PM: Adding info for No Bus:ptyv0
[    2.721552] device: 'ptyv1': device_add
[    2.722418] PM: Adding info for No Bus:ptyv1
[    2.723180] device: 'ptyv2': device_add
[    2.724095] PM: Adding info for No Bus:ptyv2
[    2.724884] device: 'ptyv3': device_add
[    2.725769] PM: Adding info for No Bus:ptyv3
[    2.726544] device: 'ptyv4': device_add
[    2.727500] PM: Adding info for No Bus:ptyv4
[    2.728325] device: 'ptyv5': device_add
[    2.729140] PM: Adding info for No Bus:ptyv5
[    2.729889] device: 'ptyv6': device_add
[    2.730726] PM: Adding info for No Bus:ptyv6
[    2.731504] device: 'ptyv7': device_add
[    2.732445] PM: Adding info for No Bus:ptyv7
[    2.733206] device: 'ptyv8': device_add
[    2.734081] PM: Adding info for No Bus:ptyv8
[    2.734831] device: 'ptyv9': device_add
[    2.735716] PM: Adding info for No Bus:ptyv9
[    2.736502] device: 'ptyva': device_add
[    2.737435] PM: Adding info for No Bus:ptyva
[    2.738212] device: 'ptyvb': device_add
[    2.739086] PM: Adding info for No Bus:ptyvb
[    2.739837] device: 'ptyvc': device_add
[    2.740723] PM: Adding info for No Bus:ptyvc
[    2.741497] device: 'ptyvd': device_add
[    2.742336] PM: Adding info for No Bus:ptyvd
[    2.743093] device: 'ptyve': device_add
[    2.743890] PM: Adding info for No Bus:ptyve
[    2.744667] device: 'ptyvf': device_add
[    2.745476] PM: Adding info for No Bus:ptyvf
[    2.746265] device: 'ptyw0': device_add
[    2.747090] PM: Adding info for No Bus:ptyw0
[    2.747841] device: 'ptyw1': device_add
[    2.748653] PM: Adding info for No Bus:ptyw1
[    2.749428] device: 'ptyw2': device_add
[    2.750230] PM: Adding info for No Bus:ptyw2
[    2.751033] device: 'ptyw3': device_add
[    2.751816] PM: Adding info for No Bus:ptyw3
[    2.752589] device: 'ptyw4': device_add
[    2.753422] PM: Adding info for No Bus:ptyw4
[    2.754213] device: 'ptyw5': device_add
[    2.755052] PM: Adding info for No Bus:ptyw5
[    2.755813] device: 'ptyw6': device_add
[    2.756721] PM: Adding info for No Bus:ptyw6
[    2.757502] device: 'ptyw7': device_add
[    2.758327] PM: Adding info for No Bus:ptyw7
[    2.759087] device: 'ptyw8': device_add
[    2.759882] PM: Adding info for No Bus:ptyw8
[    2.760655] device: 'ptyw9': device_add
[    2.761472] PM: Adding info for No Bus:ptyw9
[    2.762255] device: 'ptywa': device_add
[    2.763062] PM: Adding info for No Bus:ptywa
[    2.763826] device: 'ptywb': device_add
[    2.764645] PM: Adding info for No Bus:ptywb
[    2.765420] device: 'ptywc': device_add
[    2.766272] PM: Adding info for No Bus:ptywc
[    2.767041] device: 'ptywd': device_add
[    2.767827] PM: Adding info for No Bus:ptywd
[    2.768607] device: 'ptywe': device_add
[    2.769421] PM: Adding info for No Bus:ptywe
[    2.770262] device: 'ptywf': device_add
[    2.771064] PM: Adding info for No Bus:ptywf
[    2.771835] device: 'ptyx0': device_add
[    2.772670] PM: Adding info for No Bus:ptyx0
[    2.773444] device: 'ptyx1': device_add
[    2.774231] PM: Adding info for No Bus:ptyx1
[    2.774978] device: 'ptyx2': device_add
[    2.775811] PM: Adding info for No Bus:ptyx2
[    2.776619] device: 'ptyx3': device_add
[    2.777442] PM: Adding info for No Bus:ptyx3
[    2.778202] device: 'ptyx4': device_add
[    2.779048] PM: Adding info for No Bus:ptyx4
[    2.779823] device: 'ptyx5': device_add
[    2.780653] PM: Adding info for No Bus:ptyx5
[    2.781441] device: 'ptyx6': device_add
[    2.782229] PM: Adding info for No Bus:ptyx6
[    2.782979] device: 'ptyx7': device_add
[    2.783883] PM: Adding info for No Bus:ptyx7
[    2.784659] device: 'ptyx8': device_add
[    2.785541] PM: Adding info for No Bus:ptyx8
[    2.786307] device: 'ptyx9': device_add
[    2.787205] PM: Adding info for No Bus:ptyx9
[    2.787955] device: 'ptyxa': device_add
[    2.788797] PM: Adding info for No Bus:ptyxa
[    2.789596] device: 'ptyxb': device_add
[    2.790419] PM: Adding info for No Bus:ptyxb
[    2.791188] device: 'ptyxc': device_add
[    2.792099] PM: Adding info for No Bus:ptyxc
[    2.792849] device: 'ptyxd': device_add
[    2.793809] PM: Adding info for No Bus:ptyxd
[    2.794582] device: 'ptyxe': device_add
[    2.795471] PM: Adding info for No Bus:ptyxe
[    2.796232] device: 'ptyxf': device_add
[    2.797104] PM: Adding info for No Bus:ptyxf
[    2.797869] device: 'ptyy0': device_add
[    2.798705] PM: Adding info for No Bus:ptyy0
[    2.799486] device: 'ptyy1': device_add
[    2.800389] PM: Adding info for No Bus:ptyy1
[    2.801152] device: 'ptyy2': device_add
[    2.801928] PM: Adding info for No Bus:ptyy2
[    2.802729] device: 'ptyy3': device_add
[    2.803547] PM: Adding info for No Bus:ptyy3
[    2.804321] device: 'ptyy4': device_add
[    2.805132] PM: Adding info for No Bus:ptyy4
[    2.805897] device: 'ptyy5': device_add
[    2.806726] PM: Adding info for No Bus:ptyy5
[    2.807510] device: 'ptyy6': device_add
[    2.808326] PM: Adding info for No Bus:ptyy6
[    2.809083] device: 'ptyy7': device_add
[    2.809890] PM: Adding info for No Bus:ptyy7
[    2.810662] device: 'ptyy8': device_add
[    2.811476] PM: Adding info for No Bus:ptyy8
[    2.812251] device: 'ptyy9': device_add
[    2.813044] PM: Adding info for No Bus:ptyy9
[    2.813794] device: 'ptyya': device_add
[    2.814610] PM: Adding info for No Bus:ptyya
[    2.815401] device: 'ptyyb': device_add
[    2.816204] PM: Adding info for No Bus:ptyyb
[    2.816969] device: 'ptyyc': device_add
[    2.817779] PM: Adding info for No Bus:ptyyc
[    2.818568] device: 'ptyyd': device_add
[    2.819372] PM: Adding info for No Bus:ptyyd
[    2.820130] device: 'ptyye': device_add
[    2.820962] PM: Adding info for No Bus:ptyye
[    2.821735] device: 'ptyyf': device_add
[    2.822548] PM: Adding info for No Bus:ptyyf
[    2.823326] device: 'ptyz0': device_add
[    2.824123] PM: Adding info for No Bus:ptyz0
[    2.824886] device: 'ptyz1': device_add
[    2.825710] PM: Adding info for No Bus:ptyz1
[    2.826488] device: 'ptyz2': device_add
[    2.827283] PM: Adding info for No Bus:ptyz2
[    2.828085] device: 'ptyz3': device_add
[    2.828895] PM: Adding info for No Bus:ptyz3
[    2.829672] device: 'ptyz4': device_add
[    2.830567] PM: Adding info for No Bus:ptyz4
[    2.831354] device: 'ptyz5': device_add
[    2.832178] PM: Adding info for No Bus:ptyz5
[    2.832942] device: 'ptyz6': device_add
[    2.833776] PM: Adding info for No Bus:ptyz6
[    2.834553] device: 'ptyz7': device_add
[    2.835352] PM: Adding info for No Bus:ptyz7
[    2.836114] device: 'ptyz8': device_add
[    2.836906] PM: Adding info for No Bus:ptyz8
[    2.837681] device: 'ptyz9': device_add
[    2.838488] PM: Adding info for No Bus:ptyz9
[    2.839264] device: 'ptyza': device_add
[    2.840073] PM: Adding info for No Bus:ptyza
[    2.840831] device: 'ptyzb': device_add
[    2.841642] PM: Adding info for No Bus:ptyzb
[    2.842430] device: 'ptyzc': device_add
[    2.843238] PM: Adding info for No Bus:ptyzc
[    2.843995] device: 'ptyzd': device_add
[    2.844808] PM: Adding info for No Bus:ptyzd
[    2.845584] device: 'ptyze': device_add
[    2.846381] PM: Adding info for No Bus:ptyze
[    2.847141] device: 'ptyzf': device_add
[    2.847975] PM: Adding info for No Bus:ptyzf
[    2.848761] device: 'ptya0': device_add
[    2.849573] PM: Adding info for No Bus:ptya0
[    2.850360] device: 'ptya1': device_add
[    2.851179] PM: Adding info for No Bus:ptya1
[    2.851930] device: 'ptya2': device_add
[    2.852729] PM: Adding info for No Bus:ptya2
[    2.853533] device: 'ptya3': device_add
[    2.854356] PM: Adding info for No Bus:ptya3
[    2.855119] device: 'ptya4': device_add
[    2.855931] PM: Adding info for No Bus:ptya4
[    2.856721] device: 'ptya5': device_add
[    2.857531] PM: Adding info for No Bus:ptya5
[    2.858325] device: 'ptya6': device_add
[    2.859143] PM: Adding info for No Bus:ptya6
[    2.859994] device: 'ptya7': device_add
[    2.860792] PM: Adding info for No Bus:ptya7
[    2.861559] device: 'ptya8': device_add
[    2.862372] PM: Adding info for No Bus:ptya8
[    2.863136] device: 'ptya9': device_add
[    2.863912] PM: Adding info for No Bus:ptya9
[    2.864687] device: 'ptyaa': device_add
[    2.865502] PM: Adding info for No Bus:ptyaa
[    2.866275] device: 'ptyab': device_add
[    2.867093] PM: Adding info for No Bus:ptyab
[    2.867865] device: 'ptyac': device_add
[    2.868697] PM: Adding info for No Bus:ptyac
[    2.869475] device: 'ptyad': device_add
[    2.870294] PM: Adding info for No Bus:ptyad
[    2.871061] device: 'ptyae': device_add
[    2.871837] PM: Adding info for No Bus:ptyae
[    2.872608] device: 'ptyaf': device_add
[    2.873422] PM: Adding info for No Bus:ptyaf
[    2.874185] device: 'ptyb0': device_add
[    2.875023] PM: Adding info for No Bus:ptyb0
[    2.875784] device: 'ptyb1': device_add
[    2.876610] PM: Adding info for No Bus:ptyb1
[    2.877390] device: 'ptyb2': device_add
[    2.878203] PM: Adding info for No Bus:ptyb2
[    2.878995] device: 'ptyb3': device_add
[    2.879798] PM: Adding info for No Bus:ptyb3
[    2.880568] device: 'ptyb4': device_add
[    2.881406] PM: Adding info for No Bus:ptyb4
[    2.882185] device: 'ptyb5': device_add
[    2.882964] PM: Adding info for No Bus:ptyb5
[    2.883733] device: 'ptyb6': device_add
[    2.884551] PM: Adding info for No Bus:ptyb6
[    2.885342] device: 'ptyb7': device_add
[    2.886148] PM: Adding info for No Bus:ptyb7
[    2.886901] device: 'ptyb8': device_add
[    2.887717] PM: Adding info for No Bus:ptyb8
[    2.888503] device: 'ptyb9': device_add
[    2.889346] PM: Adding info for No Bus:ptyb9
[    2.890200] device: 'ptyba': device_add
[    2.890993] PM: Adding info for No Bus:ptyba
[    2.891770] device: 'ptybb': device_add
[    2.892581] PM: Adding info for No Bus:ptybb
[    2.893365] device: 'ptybc': device_add
[    2.894169] PM: Adding info for No Bus:ptybc
[    2.894937] device: 'ptybd': device_add
[    2.895806] PM: Adding info for No Bus:ptybd
[    2.896580] device: 'ptybe': device_add
[    2.897380] PM: Adding info for No Bus:ptybe
[    2.898142] device: 'ptybf': device_add
[    2.898949] PM: Adding info for No Bus:ptybf
[    2.899727] device: 'ptyc0': device_add
[    2.900538] PM: Adding info for No Bus:ptyc0
[    2.901315] device: 'ptyc1': device_add
[    2.902151] PM: Adding info for No Bus:ptyc1
[    2.902915] device: 'ptyc2': device_add
[    2.903746] PM: Adding info for No Bus:ptyc2
[    2.904553] device: 'ptyc3': device_add
[    2.905354] PM: Adding info for No Bus:ptyc3
[    2.906116] device: 'ptyc4': device_add
[    2.906926] PM: Adding info for No Bus:ptyc4
[    2.907714] device: 'ptyc5': device_add
[    2.908626] PM: Adding info for No Bus:ptyc5
[    2.909401] device: 'ptyc6': device_add
[    2.910205] PM: Adding info for No Bus:ptyc6
[    2.910970] device: 'ptyc7': device_add
[    2.911800] PM: Adding info for No Bus:ptyc7
[    2.912588] device: 'ptyc8': device_add
[    2.913391] PM: Adding info for No Bus:ptyc8
[    2.914150] device: 'ptyc9': device_add
[    2.915065] PM: Adding info for No Bus:ptyc9
[    2.915816] device: 'ptyca': device_add
[    2.916703] PM: Adding info for No Bus:ptyca
[    2.917474] device: 'ptycb': device_add
[    2.918415] PM: Adding info for No Bus:ptycb
[    2.919181] device: 'ptycc': device_add
[    2.919988] PM: Adding info for No Bus:ptycc
[    2.920919] device: 'ptycd': device_add
[    2.921787] PM: Adding info for No Bus:ptycd
[    2.922593] device: 'ptyce': device_add
[    2.923485] PM: Adding info for No Bus:ptyce
[    2.924261] device: 'ptycf': device_add
[    2.925108] PM: Adding info for No Bus:ptycf
[    2.925857] device: 'ptyd0': device_add
[    2.926738] PM: Adding info for No Bus:ptyd0
[    2.927515] device: 'ptyd1': device_add
[    2.928387] PM: Adding info for No Bus:ptyd1
[    2.929159] device: 'ptyd2': device_add
[    2.930059] PM: Adding info for No Bus:ptyd2
[    2.930840] device: 'ptyd3': device_add
[    2.931645] PM: Adding info for No Bus:ptyd3
[    2.932417] device: 'ptyd4': device_add
[    2.933239] PM: Adding info for No Bus:ptyd4
[    2.934032] device: 'ptyd5': device_add
[    2.934827] PM: Adding info for No Bus:ptyd5
[    2.935599] device: 'ptyd6': device_add
[    2.936399] PM: Adding info for No Bus:ptyd6
[    2.937173] device: 'ptyd7': device_add
[    2.937978] PM: Adding info for No Bus:ptyd7
[    2.938784] device: 'ptyd8': device_add
[    2.939587] PM: Adding info for No Bus:ptyd8
[    2.940353] device: 'ptyd9': device_add
[    2.941162] PM: Adding info for No Bus:ptyd9
[    2.941916] device: 'ptyda': device_add
[    2.942716] PM: Adding info for No Bus:ptyda
[    2.943486] device: 'ptydb': device_add
[    2.944309] PM: Adding info for No Bus:ptydb
[    2.945071] device: 'ptydc': device_add
[    2.945877] PM: Adding info for No Bus:ptydc
[    2.946667] device: 'ptydd': device_add
[    2.947478] PM: Adding info for No Bus:ptydd
[    2.948236] device: 'ptyde': device_add
[    2.949061] PM: Adding info for No Bus:ptyde
[    2.949882] device: 'ptydf': device_add
[    2.950680] PM: Adding info for No Bus:ptydf
[    2.951456] device: 'ptye0': device_add
[    2.952271] PM: Adding info for No Bus:ptye0
[    2.953040] device: 'ptye1': device_add
[    2.953819] PM: Adding info for No Bus:ptye1
[    2.954603] device: 'ptye2': device_add
[    2.955431] PM: Adding info for No Bus:ptye2
[    2.956233] device: 'ptye3': device_add
[    2.957093] PM: Adding info for No Bus:ptye3
[    2.957843] device: 'ptye4': device_add
[    2.958674] PM: Adding info for No Bus:ptye4
[    2.959462] device: 'ptye5': device_add
[    2.960274] PM: Adding info for No Bus:ptye5
[    2.961038] device: 'ptye6': device_add
[    2.961817] PM: Adding info for No Bus:ptye6
[    2.962592] device: 'ptye7': device_add
[    2.963423] PM: Adding info for No Bus:ptye7
[    2.964207] device: 'ptye8': device_add
[    2.965002] PM: Adding info for No Bus:ptye8
[    2.965799] device: 'ptye9': device_add
[    2.966631] PM: Adding info for No Bus:ptye9
[    2.967429] device: 'ptyea': device_add
[    2.968275] PM: Adding info for No Bus:ptyea
[    2.969064] device: 'ptyeb': device_add
[    2.969856] PM: Adding info for No Bus:ptyeb
[    2.970645] device: 'ptyec': device_add
[    2.971479] PM: Adding info for No Bus:ptyec
[    2.972274] device: 'ptyed': device_add
[    2.973082] PM: Adding info for No Bus:ptyed
[    2.973846] device: 'ptyee': device_add
[    2.974673] PM: Adding info for No Bus:ptyee
[    2.975449] device: 'ptyef': device_add
[    2.976237] PM: Adding info for No Bus:ptyef
[    2.976991] device: 'ttyp0': device_add
[    2.977809] PM: Adding info for No Bus:ttyp0
[    2.978596] device: 'ttyp1': device_add
[    2.979415] PM: Adding info for No Bus:ttyp1
[    2.980256] device: 'ttyp2': device_add
[    2.981058] PM: Adding info for No Bus:ttyp2
[    2.981854] device: 'ttyp3': device_add
[    2.982690] PM: Adding info for No Bus:ttyp3
[    2.983475] device: 'ttyp4': device_add
[    2.984337] PM: Adding info for No Bus:ttyp4
[    2.985111] device: 'ttyp5': device_add
[    2.985911] PM: Adding info for No Bus:ttyp5
[    2.986683] device: 'ttyp6': device_add
[    2.987515] PM: Adding info for No Bus:ttyp6
[    2.988297] device: 'ttyp7': device_add
[    2.989107] PM: Adding info for No Bus:ttyp7
[    2.989863] device: 'ttyp8': device_add
[    2.990689] PM: Adding info for No Bus:ttyp8
[    2.991492] device: 'ttyp9': device_add
[    2.992300] PM: Adding info for No Bus:ttyp9
[    2.993065] device: 'ttypa': device_add
[    2.993862] PM: Adding info for No Bus:ttypa
[    2.994638] device: 'ttypb': device_add
[    2.995438] PM: Adding info for No Bus:ttypb
[    2.996196] device: 'ttypc': device_add
[    2.996987] PM: Adding info for No Bus:ttypc
[    2.997761] device: 'ttypd': device_add
[    2.998577] PM: Adding info for No Bus:ttypd
[    2.999365] device: 'ttype': device_add
[    3.000187] PM: Adding info for No Bus:ttype
[    3.000939] device: 'ttypf': device_add
[    3.001756] PM: Adding info for No Bus:ttypf
[    3.002533] device: 'ttyq0': device_add
[    3.003348] PM: Adding info for No Bus:ttyq0
[    3.004110] device: 'ttyq1': device_add
[    3.004906] PM: Adding info for No Bus:ttyq1
[    3.005686] device: 'ttyq2': device_add
[    3.006481] PM: Adding info for No Bus:ttyq2
[    3.007293] device: 'ttyq3': device_add
[    3.008117] PM: Adding info for No Bus:ttyq3
[    3.008895] device: 'ttyq4': device_add
[    3.009868] PM: Adding info for No Bus:ttyq4
[    3.010751] device: 'ttyq5': device_add
[    3.011611] PM: Adding info for No Bus:ttyq5
[    3.012386] device: 'ttyq6': device_add
[    3.013190] PM: Adding info for No Bus:ttyq6
[    3.013944] device: 'ttyq7': device_add
[    3.014749] PM: Adding info for No Bus:ttyq7
[    3.015524] device: 'ttyq8': device_add
[    3.016364] PM: Adding info for No Bus:ttyq8
[    3.017146] device: 'ttyq9': device_add
[    3.017939] PM: Adding info for No Bus:ttyq9
[    3.018728] device: 'ttyqa': device_add
[    3.019544] PM: Adding info for No Bus:ttyqa
[    3.020317] device: 'ttyqb': device_add
[    3.021110] PM: Adding info for No Bus:ttyqb
[    3.021863] device: 'ttyqc': device_add
[    3.022680] PM: Adding info for No Bus:ttyqc
[    3.023452] device: 'ttyqd': device_add
[    3.024268] PM: Adding info for No Bus:ttyqd
[    3.025054] device: 'ttyqe': device_add
[    3.025844] PM: Adding info for No Bus:ttyqe
[    3.026625] device: 'ttyqf': device_add
[    3.027537] PM: Adding info for No Bus:ttyqf
[    3.028323] device: 'ttyr0': device_add
[    3.029115] PM: Adding info for No Bus:ttyr0
[    3.029863] device: 'ttyr1': device_add
[    3.030676] PM: Adding info for No Bus:ttyr1
[    3.031451] device: 'ttyr2': device_add
[    3.032239] PM: Adding info for No Bus:ttyr2
[    3.033054] device: 'ttyr3': device_add
[    3.033868] PM: Adding info for No Bus:ttyr3
[    3.034656] device: 'ttyr4': device_add
[    3.035497] PM: Adding info for No Bus:ttyr4
[    3.036290] device: 'ttyr5': device_add
[    3.037081] PM: Adding info for No Bus:ttyr5
[    3.037834] device: 'ttyr6': device_add
[    3.038699] PM: Adding info for No Bus:ttyr6
[    3.039523] device: 'ttyr7': device_add
[    3.040362] PM: Adding info for No Bus:ttyr7
[    3.041124] device: 'ttyr8': device_add
[    3.041923] PM: Adding info for No Bus:ttyr8
[    3.042711] device: 'ttyr9': device_add
[    3.043518] PM: Adding info for No Bus:ttyr9
[    3.044291] device: 'ttyra': device_add
[    3.045106] PM: Adding info for No Bus:ttyra
[    3.045856] device: 'ttyrb': device_add
[    3.046671] PM: Adding info for No Bus:ttyrb
[    3.047448] device: 'ttyrc': device_add
[    3.048237] PM: Adding info for No Bus:ttyrc
[    3.048996] device: 'ttyrd': device_add
[    3.049815] PM: Adding info for No Bus:ttyrd
[    3.050593] device: 'ttyre': device_add
[    3.051407] PM: Adding info for No Bus:ttyre
[    3.052178] device: 'ttyrf': device_add
[    3.052983] PM: Adding info for No Bus:ttyrf
[    3.053760] device: 'ttys0': device_add
[    3.054557] PM: Adding info for No Bus:ttys0
[    3.055331] device: 'ttys1': device_add
[    3.056142] PM: Adding info for No Bus:ttys1
[    3.056894] device: 'ttys2': device_add
[    3.057711] PM: Adding info for No Bus:ttys2
[    3.058530] device: 'ttys3': device_add
[    3.059348] PM: Adding info for No Bus:ttys3
[    3.060126] device: 'ttys4': device_add
[    3.060952] PM: Adding info for No Bus:ttys4
[    3.061761] device: 'ttys5': device_add
[    3.062568] PM: Adding info for No Bus:ttys5
[    3.063340] device: 'ttys6': device_add
[    3.064139] PM: Adding info for No Bus:ttys6
[    3.064892] device: 'ttys7': device_add
[    3.065730] PM: Adding info for No Bus:ttys7
[    3.066504] device: 'ttys8': device_add
[    3.067329] PM: Adding info for No Bus:ttys8
[    3.068094] device: 'ttys9': device_add
[    3.068917] PM: Adding info for No Bus:ttys9
[    3.069715] device: 'ttysa': device_add
[    3.070589] PM: Adding info for No Bus:ttysa
[    3.071363] device: 'ttysb': device_add
[    3.072175] PM: Adding info for No Bus:ttysb
[    3.072925] device: 'ttysc': device_add
[    3.073730] PM: Adding info for No Bus:ttysc
[    3.074501] device: 'ttysd': device_add
[    3.075322] PM: Adding info for No Bus:ttysd
[    3.076085] device: 'ttyse': device_add
[    3.076869] PM: Adding info for No Bus:ttyse
[    3.077658] device: 'ttysf': device_add
[    3.078498] PM: Adding info for No Bus:ttysf
[    3.079273] device: 'ttyt0': device_add
[    3.080082] PM: Adding info for No Bus:ttyt0
[    3.080835] device: 'ttyt1': device_add
[    3.081635] PM: Adding info for No Bus:ttyt1
[    3.082408] device: 'ttyt2': device_add
[    3.083229] PM: Adding info for No Bus:ttyt2
[    3.084032] device: 'ttyt3': device_add
[    3.084841] PM: Adding info for No Bus:ttyt3
[    3.085628] device: 'ttyt4': device_add
[    3.086474] PM: Adding info for No Bus:ttyt4
[    3.087274] device: 'ttyt5': device_add
[    3.088077] PM: Adding info for No Bus:ttyt5
[    3.088844] device: 'ttyt6': device_add
[    3.089663] PM: Adding info for No Bus:ttyt6
[    3.090435] device: 'ttyt7': device_add
[    3.091239] PM: Adding info for No Bus:ttyt7
[    3.091992] device: 'ttyt8': device_add
[    3.092834] PM: Adding info for No Bus:ttyt8
[    3.093608] device: 'ttyt9': device_add
[    3.094436] PM: Adding info for No Bus:ttyt9
[    3.095213] device: 'ttyta': device_add
[    3.096035] PM: Adding info for No Bus:ttyta
[    3.096784] device: 'ttytb': device_add
[    3.097604] PM: Adding info for No Bus:ttytb
[    3.098391] device: 'ttytc': device_add
[    3.099178] PM: Adding info for No Bus:ttytc
[    3.099925] device: 'ttytd': device_add
[    3.100815] PM: Adding info for No Bus:ttytd
[    3.101592] device: 'ttyte': device_add
[    3.102408] PM: Adding info for No Bus:ttyte
[    3.103184] device: 'ttytf': device_add
[    3.103981] PM: Adding info for No Bus:ttytf
[    3.104771] device: 'ttyu0': device_add
[    3.105592] PM: Adding info for No Bus:ttyu0
[    3.106370] device: 'ttyu1': device_add
[    3.107163] PM: Adding info for No Bus:ttyu1
[    3.107913] device: 'ttyu2': device_add
[    3.108743] PM: Adding info for No Bus:ttyu2
[    3.109558] device: 'ttyu3': device_add
[    3.110363] PM: Adding info for No Bus:ttyu3
[    3.111125] device: 'ttyu4': device_add
[    3.111951] PM: Adding info for No Bus:ttyu4
[    3.112754] device: 'ttyu5': device_add
[    3.113589] PM: Adding info for No Bus:ttyu5
[    3.114364] device: 'ttyu6': device_add
[    3.115157] PM: Adding info for No Bus:ttyu6
[    3.115905] device: 'ttyu7': device_add
[    3.116726] PM: Adding info for No Bus:ttyu7
[    3.117499] device: 'ttyu8': device_add
[    3.118324] PM: Adding info for No Bus:ttyu8
[    3.119085] device: 'ttyu9': device_add
[    3.119935] PM: Adding info for No Bus:ttyu9
[    3.120725] device: 'ttyua': device_add
[    3.121548] PM: Adding info for No Bus:ttyua
[    3.122324] device: 'ttyub': device_add
[    3.123128] PM: Adding info for No Bus:ttyub
[    3.123878] device: 'ttyuc': device_add
[    3.124693] PM: Adding info for No Bus:ttyuc
[    3.125467] device: 'ttyud': device_add
[    3.126270] PM: Adding info for No Bus:ttyud
[    3.127030] device: 'ttyue': device_add
[    3.127826] PM: Adding info for No Bus:ttyue
[    3.128616] device: 'ttyuf': device_add
[    3.129428] PM: Adding info for No Bus:ttyuf
[    3.130259] device: 'ttyv0': device_add
[    3.131072] PM: Adding info for No Bus:ttyv0
[    3.131833] device: 'ttyv1': device_add
[    3.132640] PM: Adding info for No Bus:ttyv1
[    3.133411] device: 'ttyv2': device_add
[    3.134216] PM: Adding info for No Bus:ttyv2
[    3.134993] device: 'ttyv3': device_add
[    3.135811] PM: Adding info for No Bus:ttyv3
[    3.136586] device: 'ttyv4': device_add
[    3.137405] PM: Adding info for No Bus:ttyv4
[    3.138199] device: 'ttyv5': device_add
[    3.139049] PM: Adding info for No Bus:ttyv5
[    3.139809] device: 'ttyv6': device_add
[    3.140609] PM: Adding info for No Bus:ttyv6
[    3.141380] device: 'ttyv7': device_add
[    3.142184] PM: Adding info for No Bus:ttyv7
[    3.142935] device: 'ttyv8': device_add
[    3.143734] PM: Adding info for No Bus:ttyv8
[    3.144503] device: 'ttyv9': device_add
[    3.145346] PM: Adding info for No Bus:ttyv9
[    3.146113] device: 'ttyva': device_add
[    3.146969] PM: Adding info for No Bus:ttyva
[    3.147764] device: 'ttyvb': device_add
[    3.148581] PM: Adding info for No Bus:ttyvb
[    3.149354] device: 'ttyvc': device_add
[    3.150167] PM: Adding info for No Bus:ttyvc
[    3.150921] device: 'ttyvd': device_add
[    3.151720] PM: Adding info for No Bus:ttyvd
[    3.152487] device: 'ttyve': device_add
[    3.153305] PM: Adding info for No Bus:ttyve
[    3.154068] device: 'ttyvf': device_add
[    3.154853] PM: Adding info for No Bus:ttyvf
[    3.155640] device: 'ttyw0': device_add
[    3.156463] PM: Adding info for No Bus:ttyw0
[    3.157228] device: 'ttyw1': device_add
[    3.158047] PM: Adding info for No Bus:ttyw1
[    3.158811] device: 'ttyw2': device_add
[    3.159612] PM: Adding info for No Bus:ttyw2
[    3.160482] device: 'ttyw3': device_add
[    3.161305] PM: Adding info for No Bus:ttyw3
[    3.162068] device: 'ttyw4': device_add
[    3.162865] PM: Adding info for No Bus:ttyw4
[    3.163662] device: 'ttyw5': device_add
[    3.164492] PM: Adding info for No Bus:ttyw5
[    3.165281] device: 'ttyw6': device_add
[    3.166075] PM: Adding info for No Bus:ttyw6
[    3.166822] device: 'ttyw7': device_add
[    3.167636] PM: Adding info for No Bus:ttyw7
[    3.168423] device: 'ttyw8': device_add
[    3.169224] PM: Adding info for No Bus:ttyw8
[    3.169973] device: 'ttyw9': device_add
[    3.170771] PM: Adding info for No Bus:ttyw9
[    3.171542] device: 'ttywa': device_add
[    3.172363] PM: Adding info for No Bus:ttywa
[    3.173138] device: 'ttywb': device_add
[    3.173969] PM: Adding info for No Bus:ttywb
[    3.174743] device: 'ttywc': device_add
[    3.175560] PM: Adding info for No Bus:ttywc
[    3.176335] device: 'ttywd': device_add
[    3.177122] PM: Adding info for No Bus:ttywd
[    3.177869] device: 'ttywe': device_add
[    3.178728] PM: Adding info for No Bus:ttywe
[    3.179501] device: 'ttywf': device_add
[    3.180324] PM: Adding info for No Bus:ttywf
[    3.181097] device: 'ttyx0': device_add
[    3.181886] PM: Adding info for No Bus:ttyx0
[    3.182669] device: 'ttyx1': device_add
[    3.183486] PM: Adding info for No Bus:ttyx1
[    3.184260] device: 'ttyx2': device_add
[    3.185075] PM: Adding info for No Bus:ttyx2
[    3.185852] device: 'ttyx3': device_add
[    3.186673] PM: Adding info for No Bus:ttyx3
[    3.187448] device: 'ttyx4': device_add
[    3.188278] PM: Adding info for No Bus:ttyx4
[    3.189057] device: 'ttyx5': device_add
[    3.189860] PM: Adding info for No Bus:ttyx5
[    3.190728] device: 'ttyx6': device_add
[    3.191556] PM: Adding info for No Bus:ttyx6
[    3.192328] device: 'ttyx7': device_add
[    3.193118] PM: Adding info for No Bus:ttyx7
[    3.193864] device: 'ttyx8': device_add
[    3.194678] PM: Adding info for No Bus:ttyx8
[    3.195450] device: 'ttyx9': device_add
[    3.196237] PM: Adding info for No Bus:ttyx9
[    3.196990] device: 'ttyxa': device_add
[    3.197806] PM: Adding info for No Bus:ttyxa
[    3.198602] device: 'ttyxb': device_add
[    3.199416] PM: Adding info for No Bus:ttyxb
[    3.200181] device: 'ttyxc': device_add
[    3.201041] PM: Adding info for No Bus:ttyxc
[    3.201799] device: 'ttyxd': device_add
[    3.202616] PM: Adding info for No Bus:ttyxd
[    3.203391] device: 'ttyxe': device_add
[    3.204178] PM: Adding info for No Bus:ttyxe
[    3.204924] device: 'ttyxf': device_add
[    3.205742] PM: Adding info for No Bus:ttyxf
[    3.206517] device: 'ttyy0': device_add
[    3.207337] PM: Adding info for No Bus:ttyy0
[    3.208110] device: 'ttyy1': device_add
[    3.208922] PM: Adding info for No Bus:ttyy1
[    3.209699] device: 'ttyy2': device_add
[    3.210500] PM: Adding info for No Bus:ttyy2
[    3.211301] device: 'ttyy3': device_add
[    3.212108] PM: Adding info for No Bus:ttyy3
[    3.212857] device: 'ttyy4': device_add
[    3.213688] PM: Adding info for No Bus:ttyy4
[    3.214477] device: 'ttyy5': device_add
[    3.215284] PM: Adding info for No Bus:ttyy5
[    3.216061] device: 'ttyy6': device_add
[    3.216874] PM: Adding info for No Bus:ttyy6
[    3.217663] device: 'ttyy7': device_add
[    3.218473] PM: Adding info for No Bus:ttyy7
[    3.219230] device: 'ttyy8': device_add
[    3.220127] PM: Adding info for No Bus:ttyy8
[    3.220874] device: 'ttyy9': device_add
[    3.221673] PM: Adding info for No Bus:ttyy9
[    3.222442] device: 'ttyya': device_add
[    3.223257] PM: Adding info for No Bus:ttyya
[    3.224007] device: 'ttyyb': device_add
[    3.224836] PM: Adding info for No Bus:ttyyb
[    3.225632] device: 'ttyyc': device_add
[    3.226435] PM: Adding info for No Bus:ttyyc
[    3.227195] device: 'ttyyd': device_add
[    3.228054] PM: Adding info for No Bus:ttyyd
[    3.228816] device: 'ttyye': device_add
[    3.229618] PM: Adding info for No Bus:ttyye
[    3.230390] device: 'ttyyf': device_add
[    3.231192] PM: Adding info for No Bus:ttyyf
[    3.231942] device: 'ttyz0': device_add
[    3.232747] PM: Adding info for No Bus:ttyz0
[    3.233533] device: 'ttyz1': device_add
[    3.234356] PM: Adding info for No Bus:ttyz1
[    3.235127] device: 'ttyz2': device_add
[    3.235925] PM: Adding info for No Bus:ttyz2
[    3.236729] device: 'ttyz3': device_add
[    3.237534] PM: Adding info for No Bus:ttyz3
[    3.238318] device: 'ttyz4': device_add
[    3.239141] PM: Adding info for No Bus:ttyz4
[    3.239909] device: 'ttyz5': device_add
[    3.240714] PM: Adding info for No Bus:ttyz5
[    3.241490] device: 'ttyz6': device_add
[    3.242330] PM: Adding info for No Bus:ttyz6
[    3.243105] device: 'ttyz7': device_add
[    3.243887] PM: Adding info for No Bus:ttyz7
[    3.244662] device: 'ttyz8': device_add
[    3.245479] PM: Adding info for No Bus:ttyz8
[    3.246252] device: 'ttyz9': device_add
[    3.247062] PM: Adding info for No Bus:ttyz9
[    3.247833] device: 'ttyza': device_add
[    3.248654] PM: Adding info for No Bus:ttyza
[    3.249430] device: 'ttyzb': device_add
[    3.250350] PM: Adding info for No Bus:ttyzb
[    3.251129] device: 'ttyzc': device_add
[    3.251924] PM: Adding info for No Bus:ttyzc
[    3.252764] device: 'ttyzd': device_add
[    3.253575] PM: Adding info for No Bus:ttyzd
[    3.254348] device: 'ttyze': device_add
[    3.255179] PM: Adding info for No Bus:ttyze
[    3.255925] device: 'ttyzf': device_add
[    3.256742] PM: Adding info for No Bus:ttyzf
[    3.257515] device: 'ttya0': device_add
[    3.258340] PM: Adding info for No Bus:ttya0
[    3.259116] device: 'ttya1': device_add
[    3.259921] PM: Adding info for No Bus:ttya1
[    3.260709] device: 'ttya2': device_add
[    3.261648] PM: Adding info for No Bus:ttya2
[    3.262544] device: 'ttya3': device_add
[    3.263339] PM: Adding info for No Bus:ttya3
[    3.264096] device: 'ttya4': device_add
[    3.264905] PM: Adding info for No Bus:ttya4
[    3.265696] device: 'ttya5': device_add
[    3.266503] PM: Adding info for No Bus:ttya5
[    3.267272] device: 'ttya6': device_add
[    3.268087] PM: Adding info for No Bus:ttya6
[    3.268868] device: 'ttya7': device_add
[    3.269703] PM: Adding info for No Bus:ttya7
[    3.270478] device: 'ttya8': device_add
[    3.271276] PM: Adding info for No Bus:ttya8
[    3.272052] device: 'ttya9': device_add
[    3.272845] PM: Adding info for No Bus:ttya9
[    3.273623] device: 'ttyaa': device_add
[    3.274436] PM: Adding info for No Bus:ttyaa
[    3.275196] device: 'ttyab': device_add
[    3.275993] PM: Adding info for No Bus:ttyab
[    3.276789] device: 'ttyac': device_add
[    3.277602] PM: Adding info for No Bus:ttyac
[    3.278396] device: 'ttyad': device_add
[    3.279199] PM: Adding info for No Bus:ttyad
[    3.280051] device: 'ttyae': device_add
[    3.280842] PM: Adding info for No Bus:ttyae
[    3.281620] device: 'ttyaf': device_add
[    3.282459] PM: Adding info for No Bus:ttyaf
[    3.283221] device: 'ttyb0': device_add
[    3.284050] PM: Adding info for No Bus:ttyb0
[    3.284810] device: 'ttyb1': device_add
[    3.285620] PM: Adding info for No Bus:ttyb1
[    3.286407] device: 'ttyb2': device_add
[    3.287211] PM: Adding info for No Bus:ttyb2
[    3.287992] device: 'ttyb3': device_add
[    3.288811] PM: Adding info for No Bus:ttyb3
[    3.289586] device: 'ttyb4': device_add
[    3.290418] PM: Adding info for No Bus:ttyb4
[    3.291193] device: 'ttyb5': device_add
[    3.291993] PM: Adding info for No Bus:ttyb5
[    3.292768] device: 'ttyb6': device_add
[    3.293573] PM: Adding info for No Bus:ttyb6
[    3.294355] device: 'ttyb7': device_add
[    3.295176] PM: Adding info for No Bus:ttyb7
[    3.295928] device: 'ttyb8': device_add
[    3.296729] PM: Adding info for No Bus:ttyb8
[    3.297501] device: 'ttyb9': device_add
[    3.298313] PM: Adding info for No Bus:ttyb9
[    3.299074] device: 'ttyba': device_add
[    3.299866] PM: Adding info for No Bus:ttyba
[    3.300641] device: 'ttybb': device_add
[    3.301461] PM: Adding info for No Bus:ttybb
[    3.302233] device: 'ttybc': device_add
[    3.303068] PM: Adding info for No Bus:ttybc
[    3.303835] device: 'ttybd': device_add
[    3.304728] PM: Adding info for No Bus:ttybd
[    3.305502] device: 'ttybe': device_add
[    3.306321] PM: Adding info for No Bus:ttybe
[    3.307083] device: 'ttybf': device_add
[    3.307856] PM: Adding info for No Bus:ttybf
[    3.308640] device: 'ttyc0': device_add
[    3.309499] PM: Adding info for No Bus:ttyc0
[    3.310355] device: 'ttyc1': device_add
[    3.311156] PM: Adding info for No Bus:ttyc1
[    3.311918] device: 'ttyc2': device_add
[    3.312748] PM: Adding info for No Bus:ttyc2
[    3.313556] device: 'ttyc3': device_add
[    3.314375] PM: Adding info for No Bus:ttyc3
[    3.315139] device: 'ttyc4': device_add
[    3.315940] PM: Adding info for No Bus:ttyc4
[    3.316729] device: 'ttyc5': device_add
[    3.317548] PM: Adding info for No Bus:ttyc5
[    3.318336] device: 'ttyc6': device_add
[    3.319122] PM: Adding info for No Bus:ttyc6
[    3.319886] device: 'ttyc7': device_add
[    3.320716] PM: Adding info for No Bus:ttyc7
[    3.321504] device: 'ttyc8': device_add
[    3.322457] PM: Adding info for No Bus:ttyc8
[    3.323219] device: 'ttyc9': device_add
[    3.324042] PM: Adding info for No Bus:ttyc9
[    3.324789] device: 'ttyca': device_add
[    3.325603] PM: Adding info for No Bus:ttyca
[    3.326379] device: 'ttycb': device_add
[    3.327170] PM: Adding info for No Bus:ttycb
[    3.327920] device: 'ttycc': device_add
[    3.328761] PM: Adding info for No Bus:ttycc
[    3.329549] device: 'ttycd': device_add
[    3.330363] PM: Adding info for No Bus:ttycd
[    3.331123] device: 'ttyce': device_add
[    3.331922] PM: Adding info for No Bus:ttyce
[    3.332702] device: 'ttycf': device_add
[    3.333503] PM: Adding info for No Bus:ttycf
[    3.334271] device: 'ttyd0': device_add
[    3.335081] PM: Adding info for No Bus:ttyd0
[    3.335829] device: 'ttyd1': device_add
[    3.336684] PM: Adding info for No Bus:ttyd1
[    3.337470] device: 'ttyd2': device_add
[    3.338300] PM: Adding info for No Bus:ttyd2
[    3.339105] device: 'ttyd3': device_add
[    3.339995] PM: Adding info for No Bus:ttyd3
[    3.340855] device: 'ttyd4': device_add
[    3.341674] PM: Adding info for No Bus:ttyd4
[    3.342472] device: 'ttyd5': device_add
[    3.343288] PM: Adding info for No Bus:ttyd5
[    3.344059] device: 'ttyd6': device_add
[    3.344840] PM: Adding info for No Bus:ttyd6
[    3.345615] device: 'ttyd7': device_add
[    3.346447] PM: Adding info for No Bus:ttyd7
[    3.347222] device: 'ttyd8': device_add
[    3.348047] PM: Adding info for No Bus:ttyd8
[    3.348813] device: 'ttyd9': device_add
[    3.349614] PM: Adding info for No Bus:ttyd9
[    3.350387] device: 'ttyda': device_add
[    3.351193] PM: Adding info for No Bus:ttyda
[    3.351943] device: 'ttydb': device_add
[    3.352741] PM: Adding info for No Bus:ttydb
[    3.353513] device: 'ttydc': device_add
[    3.354341] PM: Adding info for No Bus:ttydc
[    3.355115] device: 'ttydd': device_add
[    3.355904] PM: Adding info for No Bus:ttydd
[    3.356689] device: 'ttyde': device_add
[    3.357507] PM: Adding info for No Bus:ttyde
[    3.358294] device: 'ttydf': device_add
[    3.359094] PM: Adding info for No Bus:ttydf
[    3.359847] device: 'ttye0': device_add
[    3.360646] PM: Adding info for No Bus:ttye0
[    3.361419] device: 'ttye1': device_add
[    3.362223] PM: Adding info for No Bus:ttye1
[    3.362981] device: 'ttye2': device_add
[    3.363838] PM: Adding info for No Bus:ttye2
[    3.364654] device: 'ttye3': device_add
[    3.365489] PM: Adding info for No Bus:ttye3
[    3.366263] device: 'ttye4': device_add
[    3.367075] PM: Adding info for No Bus:ttye4
[    3.367841] device: 'ttye5': device_add
[    3.368679] PM: Adding info for No Bus:ttye5
[    3.369456] device: 'ttye6': device_add
[    3.370333] PM: Adding info for No Bus:ttye6
[    3.371096] device: 'ttye7': device_add
[    3.371877] PM: Adding info for No Bus:ttye7
[    3.372661] device: 'ttye8': device_add
[    3.373488] PM: Adding info for No Bus:ttye8
[    3.374270] device: 'ttye9': device_add
[    3.375069] PM: Adding info for No Bus:ttye9
[    3.375820] device: 'ttyea': device_add
[    3.376636] PM: Adding info for No Bus:ttyea
[    3.377409] device: 'ttyeb': device_add
[    3.378198] PM: Adding info for No Bus:ttyeb
[    3.378959] device: 'ttyec': device_add
[    3.379776] PM: Adding info for No Bus:ttyec
[    3.380548] device: 'ttyed': device_add
[    3.381374] PM: Adding info for No Bus:ttyed
[    3.382157] device: 'ttyee': device_add
[    3.382944] PM: Adding info for No Bus:ttyee
[    3.383718] device: 'ttyef': device_add
[    3.384533] PM: Adding info for No Bus:ttyef
[    3.385305] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[    3.386419] Registering platform device 'serial8250'. Parent at platform
[    3.387574] device: 'serial8250': device_add
[    3.388345] bus: 'platform': add device serial8250
[    3.389203] PM: Adding info for platform:serial8250
[    3.414529] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
[    3.415600] device: 'ttyS0': device_add
[    3.416501] PM: Adding info for No Bus:ttyS0
[    3.417378] device: 'ttyS1': device_add
[    3.418315] PM: Adding info for No Bus:ttyS1
[    3.419133] device: 'ttyS2': device_add
[    3.419926] PM: Adding info for No Bus:ttyS2
[    3.420806] device: 'ttyS3': device_add
[    3.421680] PM: Adding info for No Bus:ttyS3
[    3.422475] bus: 'platform': add driver serial8250
[    3.423329] bus: 'platform': driver_probe_device: matched device serial8250 with driver serial8250
[    3.424864] bus: 'platform': really_probe: probing driver serial8250 with device serial8250
[    3.426333] driver: 'serial8250': driver_bound: bound to device 'serial8250'
[    3.427551] bus: 'platform': really_probe: bound device serial8250 to driver serial8250
[    3.428975] device: 'ttyprintk': device_add
[    3.429964] PM: Adding info for No Bus:ttyprintk
[    3.430795] bus: 'platform': add driver tpm_tis
[    3.431616] Registering platform device 'tpm_tis'. Parent at platform
[    3.432728] device: 'tpm_tis': device_add
[    3.433455] bus: 'platform': add device tpm_tis
[    3.434306] PM: Adding info for platform:tpm_tis
[    3.435137] bus: 'platform': driver_probe_device: matched device tpm_tis with driver tpm_tis
[    3.436587] bus: 'platform': really_probe: probing driver tpm_tis with device tpm_tis
[    3.437930] driver: 'tpm_tis': driver_bound: bound to device 'tpm_tis'
[    3.439067] bus: 'platform': really_probe: bound device tpm_tis to driver tpm_tis
[    3.440362] device: 'tpm0': device_add
[    3.441156] PM: Adding info for No Bus:tpm0
[    4.195061] device: 'tpm0': device_unregister
[    4.195834] PM: Removing info for No Bus:tpm0
[    4.197167] device: 'tpm0': device_create_release
[    4.198234] PM: Removing info for platform:tpm_tis
[    4.199181] bus: 'platform': remove device tpm_tis
[    4.200169] bus: 'platform': remove driver tpm_tis
[    4.201056] driver: 'tpm_tis': driver_release
[    4.201866] Registering platform device 'i8042'. Parent at platform
[    4.202958] device: 'i8042': device_add
[    4.203650] bus: 'platform': add device i8042
[    4.204445] PM: Adding info for platform:i8042
[    4.205233] bus: 'platform': add driver i8042
[    4.205989] bus: 'platform': driver_probe_device: matched device i8042 with driver i8042
[    4.207383] bus: 'platform': really_probe: probing driver i8042 with device i8042
[    4.209696] serio: i8042 KBD port at 0x60,0x64 irq 1
[    4.210710] serio: i8042 AUX port at 0x60,0x64 irq 12
[    4.211694] device: 'serio0': device_add
[    4.212434] bus: 'serio': add device serio0
[    4.213211] PM: Adding info for serio:serio0
[    4.214044] driver: 'i8042': driver_bound: bound to device 'i8042'
[    4.215125] device: 'serio1': device_add
[    4.215818] bus: 'serio': add device serio1
[    4.216637] PM: Adding info for serio:serio1
[    4.217436] bus: 'platform': really_probe: bound device i8042 to driver i8042
[    4.218699] bus: 'serio': add driver atkbd
[    4.219484] cpuidle: using governor ladder
[    4.220333] 
[    4.220333] printing PIC contents
[    4.221167] ... PIC  IMR: fffb
[    4.221702] ... PIC  IRR: 1013
[    4.222263] ... PIC  ISR: 0000
[    4.222790] ... PIC ELCR: 0c00
[    4.223345] printing local APIC contents on CPU#0/0:
[    4.224185] ... APIC ID:      00000000 (0)
[    4.224329] ... APIC VERSION: 00050014
[    4.224329] ... APIC TASKPRI: 00000000 (00)
[    4.224329] ... APIC PROCPRI: 00000000
[    4.224329] ... APIC LDR: 01000000
[    4.224329] ... APIC DFR: ffffffff
[    4.224329] ... APIC SPIV: 000001ff
[    4.224329] ... APIC ISR field:
[    4.224329] 0000000000000000000000000000000000000000000000000000000000000000
[    4.224329] ... APIC TMR field:
[    4.224329] 0000000000000000000000000000000000000000000000000000000000000000
[    4.224329] ... APIC IRR field:
[    4.224329] 0000000000000000000000000000000000000000000000000000000020008000
[    4.224329] ... APIC ESR: 00000000
[    4.224329] ... APIC ICR: 00000841
[    4.224329] ... APIC ICR2: 01000000
[    4.224329] ... APIC LVTT: 000000ef
[    4.224329] ... APIC LVTPC: 00010000
[    4.224329] ... APIC LVT0: 00010700
[    4.224329] ... APIC LVT1: 00000400
[    4.224329] ... APIC LVTERR: 000000fe
[    4.224329] ... APIC TMICT: 0000a2d2
[    4.224329] ... APIC TMCCT: 00000000
[    4.224329] ... APIC TDCR: 00000003
[    4.224329] 
[    4.241632] number of MP IRQ sources: 20.
[    4.242350] number of IO-APIC #2 registers: 24.
[    4.243145] testing the IO APIC.......................
[    4.244064] IO APIC #2......
[    4.244565] .... register #00: 00000000
[    4.245234] .......    : physical APIC id: 00
[    4.245976] .......    : Delivery Type: 0
[    4.246686] .......    : LTS          : 0
[    4.247398] .... register #01: 00170011
[    4.248068] .......     : max redirection entries: 17
[    4.248936] .......     : PRQ implemented: 0
[    4.249687] .......     : IO APIC version: 11
[    4.250456] .... register #02: 00000000
[    4.251139] .......     : arbitration: 00
[    4.251841] .... IRQ redirection table:
[    4.252600]  NR Dst Mask Trig IRR Pol Stat Dmod Deli Vect:
[    4.253557]  00 00  1    0    0   0   0    0    0    00
[    4.254494]  01 03  0    0    0   0   0    1    1    41
[    4.255445]  02 03  0    0    0   0   0    1    1    51
[    4.256380]  03 01  0    0    0   0   0    1    1    61
[    4.257321]  04 01  1    0    0   0   0    1    1    71
[    4.258269]  05 01  0    0    0   0   0    1    1    81
[    4.259201]  06 01  0    0    0   0   0    1    1    91
[    4.260156]  07 01  0    0    0   0   0    1    1    A1
[    4.261116]  08 01  0    0    0   0   0    1    1    B1
[    4.262064]  09 03  1    1    0   0   0    1    1    C1
[    4.263075]  0a 03  1    1    0   0   0    1    1    D1
[    4.263993]  0b 03  1    1    0   0   0    1    1    E1
[    4.264927]  0c 03  0    0    0   0   0    1    1    22
[    4.265864]  0d 01  0    0    0   0   0    1    1    42
[    4.266798]  0e 01  0    0    0   0   0    1    1    52
[    4.267738]  0f 01  0    0    0   0   0    1    1    62
[    4.268703]  10 00  1    0    0   0   0    0    0    00
[    4.269711]  11 00  1    0    0   0   0    0    0    00
[    4.270679]  12 00  1    0    0   0   0    0    0    00
[    4.271623]  13 00  1    0    0   0   0    0    0    00
[    4.272560]  14 00  1    0    0   0   0    0    0    00
[    4.273498]  15 00  1    0    0   0   0    0    0    00
[    4.274437]  16 00  1    0    0   0   0    0    0    00
[    4.275374]  17 00  1    0    0   0   0    0    0    00
[    4.276302] IRQ to pin mappings:
[    4.276861] IRQ0 -> 0:2
[    4.277369] IRQ1 -> 0:1
[    4.277846] IRQ3 -> 0:3
[    4.278368] IRQ4 -> 0:4
[    4.278840] IRQ5 -> 0:5
[    4.279341] IRQ6 -> 0:6
[    4.279806] IRQ7 -> 0:7
[    4.280307] IRQ8 -> 0:8
[    4.280772] IRQ12 -> 0:12
[    4.281299] IRQ13 -> 0:13
[    4.281794] IRQ14 -> 0:14
[    4.282323] IRQ15 -> 0:15
[    4.282818] IRQ33 -> 0:9
[    4.283330] IRQ34 -> 0:10
[    4.283821] IRQ35 -> 0:11
[    4.284346] .................................... done.
[    4.285272] bus: 'serio': driver_probe_device: matched device serio0 with driver atkbd
[    4.285342] device: 'cpu_dma_latency': device_add
[    4.285428] PM: Adding info for No Bus:cpu_dma_latency
[    4.285464] device: 'network_latency': device_add
[    4.285544] PM: Adding info for No Bus:network_latency
[    4.285575] device: 'network_throughput': device_add
[    4.285639] PM: Adding info for No Bus:network_throughput
[    4.285682] PM: Hibernation image not present or could not be loaded.
[    4.285721] registered taskstats version 1
[    4.285723] Running tests on trace events:
[    4.285725] Testing event kfree_skb: [    4.294208] bus: 'serio': really_probe: probing driver atkbd with device serio0
[    4.297195] device: 'input0': device_add
[    4.298042] PM: Adding info for No Bus:input0
[    4.298925] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
[    4.300637] driver: 'serio0': driver_bound: bound to device 'atkbd'
[    4.300706] Testing event consume_skb: OK
[    4.302376] bus: 'serio': really_probe: bound device serio0 to driver atkbd
[    4.303686] bus: 'serio': driver_probe_device: matched device serio1 with driver atkbd
[    4.305078] bus: 'serio': really_probe: probing driver atkbd with device serio1
[    4.306670] atkbd: probe of serio1 rejects match -19
[    4.308159] OK
[    4.308516] Testing event skb_copy_datagram_iovec: OK
[    4.313332] Testing event net_dev_xmit: OK
[    4.318324] Testing event net_dev_queue: OK
[    4.323321] Testing event netif_receive_skb: OK
[    4.328338] Testing event netif_rx: OK
[    4.333307] Testing event napi_poll: OK
[    4.338312] Testing event sock_rcvqueue_full: OK
[    4.343327] Testing event sock_exceed_buf_limit: OK
[    4.348306] Testing event udp_fail_queue_rcv_skb: OK
[    4.353292] Testing event regmap_reg_write: OK
[    4.358307] Testing event regmap_reg_read: OK
[    4.363288] Testing event regmap_reg_read_cache: OK
[    4.368310] Testing event regmap_hw_read_start: OK
[    4.373288] Testing event regmap_hw_read_done: OK
[    4.378311] Testing event regmap_hw_write_start: OK
[    4.383292] Testing event regmap_hw_write_done: OK
[    4.388300] Testing event regcache_sync: OK
[    4.393289] Testing event regmap_cache_only: OK
[    4.398338] Testing event regmap_cache_bypass: OK
[    4.403288] Testing event mix_pool_bytes: OK
[    4.408307] Testing event mix_pool_bytes_nolock: OK
[    4.413289] Testing event credit_entropy_bits: OK
[    4.418306] Testing event get_random_bytes: OK
[    4.423309] Testing event extract_entropy: OK
[    4.428309] Testing event extract_entropy_user: OK
[    4.433289] Testing event regulator_enable: OK
[    4.438303] Testing event regulator_enable_delay: OK
[    4.443323] Testing event regulator_enable_complete: OK
[    4.448298] Testing event regulator_disable: OK
[    4.453291] Testing event regulator_disable_complete: OK
[    4.458307] Testing event regulator_set_voltage: OK
[    4.463288] Testing event regulator_set_voltage_complete: OK
[    4.468304] Testing event gpio_direction: OK
[    4.473295] Testing event gpio_value: OK
[    4.478304] Testing event block_rq_abort: OK
[    4.483238] Testing event block_rq_requeue: OK
[    4.488339] Testing event block_rq_complete: OK
[    4.493294] Testing event block_rq_insert: OK
[    4.498307] Testing event block_rq_issue: OK
[    4.503303] Testing event block_bio_bounce: OK
[    4.508298] Testing event block_bio_complete: OK
[    4.513292] Testing event block_bio_backmerge: OK
[    4.518301] Testing event block_bio_frontmerge: OK
[    4.523291] Testing event block_bio_queue: OK
[    4.528303] Testing event block_getrq: OK
[    4.533329] Testing event block_sleeprq: OK
[    4.538301] Testing event block_plug: OK
[    4.543289] Testing event block_unplug: OK
[    4.548309] Testing event block_split: OK
[    4.553295] Testing event block_bio_remap: OK
[    4.558301] Testing event block_rq_remap: OK
[    4.563298] Testing event writeback_nothread: OK
[    4.568301] Testing event writeback_queue: OK
[    4.573290] Testing event writeback_exec: OK
[    4.578330] Testing event writeback_start: OK
[    4.583291] Testing event writeback_written: OK
[    4.588303] Testing event writeback_wait: OK
[    4.593292] Testing event writeback_pages_written: OK
[    4.598237] Testing event writeback_nowork: OK
[    4.603288] Testing event writeback_wake_background: OK
[    4.608306] Testing event writeback_wake_thread: OK
[    4.613301] Testing event writeback_wake_forker_thread: OK
[    4.618306] Testing event writeback_bdi_register: OK
[    4.623290] Testing event writeback_bdi_unregister: OK
[    4.628303] Testing event writeback_thread_start: OK
[    4.633329] Testing event writeback_thread_stop: OK
[    4.638304] Testing event wbc_writepage: OK
[    4.643290] Testing event writeback_queue_io: OK
[    4.648306] Testing event global_dirty_state: OK
[    4.653237] Testing event bdi_dirty_ratelimit: OK
[    4.658295] Testing event balance_dirty_pages: OK
[    4.663256] Testing event writeback_sb_inodes_requeue: OK
[    4.668272] Testing event writeback_congestion_wait: OK
[    4.673256] Testing event writeback_wait_iff_congested: OK
[    4.678306] Testing event writeback_single_inode: OK
[    4.683271] Testing event mm_compaction_isolate_migratepages: OK
[    4.688266] Testing event mm_compaction_isolate_freepages: OK
[    4.693258] Testing event mm_compaction_migratepages: OK
[    4.698274] Testing event kmalloc: OK
[    4.703263] Testing event kmem_cache_alloc: OK
[    4.708277] Testing event kmalloc_node: OK
[    4.713254] Testing event kmem_cache_alloc_node: OK
[    4.718264] Testing event kfree: OK
[    4.722270] Testing event kmem_cache_free: OK
[    4.727261] Testing event mm_page_free: OK
[    4.732307] Testing event mm_page_free_batched: OK
[    4.737256] Testing event mm_page_alloc: OK
[    4.742272] Testing event mm_page_alloc_zone_locked: OK
[    4.747257] Testing event mm_page_pcpu_drain: OK
[    4.752254] Testing event mm_page_alloc_extfrag: OK
[    4.757256] Testing event mm_vmscan_kswapd_sleep: OK
[    4.762257] Testing event mm_vmscan_kswapd_wake: OK
[    4.767263] Testing event mm_vmscan_wakeup_kswapd: OK
[    4.772256] Testing event mm_vmscan_direct_reclaim_begin: OK
[    4.777293] Testing event mm_vmscan_memcg_reclaim_begin: OK
[    4.782256] Testing event mm_vmscan_memcg_softlimit_reclaim_begin: OK
[    4.787260] Testing event mm_vmscan_direct_reclaim_end: OK
[    4.792254] Testing event mm_vmscan_memcg_reclaim_end: OK
[    4.797258] Testing event mm_vmscan_memcg_softlimit_reclaim_end: OK
[    4.802258] Testing event mm_shrink_slab_start: OK
[    4.807254] Testing event mm_shrink_slab_end: OK
[    4.812267] Testing event mm_vmscan_lru_isolate: OK
[    4.817256] Testing event mm_vmscan_memcg_isolate: OK
[    4.822294] Testing event mm_vmscan_writepage: OK
[    4.827257] Testing event mm_vmscan_lru_shrink_inactive: OK
[    4.832255] Testing event oom_score_adj_update: OK
[    4.837272] Testing event rpm_suspend: OK
[    4.842266] Testing event rpm_resume: OK
[    4.847254] Testing event rpm_idle: OK
[    4.852275] Testing event rpm_return_int: OK
[    4.857259] Testing event cpu_idle: OK
[    4.862274] Testing event cpu_frequency: OK
[    4.867257] Testing event machine_suspend: OK
[    4.872277] Testing event wakeup_source_activate: OK
[    4.877254] Testing event wakeup_source_deactivate: OK
[    4.882258] Testing event clock_enable: OK
[    4.887259] Testing event clock_disable: OK
[    4.892270] Testing event clock_set_rate: OK
[    4.897258] Testing event power_domain_target: OK
[    4.902257] Testing event ftrace_test_filter: OK
[    4.907300] Testing event module_load: OK
[    4.912275] Testing event module_free: OK
[    4.917497] Testing event module_request: OK
[    4.923568] Testing event lock_acquire: OK
[    4.928486] Testing event lock_release: OK
[    4.933310] Testing event sched_kthread_stop: OK
[    4.938267] Testing event sched_kthread_stop_ret: OK
[    4.943258] Testing event sched_wakeup: OK
[    4.948373] Testing event sched_wakeup_new: OK
[    4.953258] Testing event sched_switch: OK
[    4.958273] Testing event sched_migrate_task: OK
[    4.963253] Testing event sched_process_free: OK
[    4.968266] Testing event sched_process_exit: OK
[    4.973265] Testing event sched_wait_task: OK
[    4.978267] Testing event sched_process_wait: OK
[    4.983255] Testing event sched_process_fork: OK
[    4.988270] Testing event sched_process_exec: OK
[    4.993294] Testing event sched_stat_wait: OK
[    4.998277] Testing event sched_stat_sleep: OK
[    5.003261] Testing event sched_stat_iowait: OK
[    5.008266] Testing event sched_stat_blocked: OK
[    5.013261] Testing event sched_stat_runtime: OK
[    5.018276] Testing event sched_pi_setprio: OK
[    5.023255] Testing event rcu_utilization: OK
[    5.028279] Testing event rcu_grace_period: OK
[    5.033260] Testing event rcu_grace_period_init: OK
[    5.038307] Testing event rcu_preempt_task: OK
[    5.043266] Testing event rcu_unlock_preempted_task: OK
[    5.048267] Testing event rcu_quiescent_state_report: OK
[    5.053265] Testing event rcu_fqs: OK
[    5.058272] Testing event rcu_dyntick: OK
[    5.063270] Testing event rcu_prep_idle: OK
[    5.068278] Testing event rcu_callback: OK
[    5.073260] Testing event rcu_kfree_callback: OK
[    5.078266] Testing event rcu_batch_start: OK
[    5.083292] Testing event rcu_invoke_callback: OK
[    5.088274] Testing event rcu_invoke_kfree_callback: OK
[    5.093259] Testing event rcu_batch_end: OK
[    5.098278] Testing event rcu_torture_read: OK
[    5.103267] Testing event rcu_barrier: OK
[    5.108276] Testing event workqueue_queue_work: OK
[    5.113252] Testing event workqueue_activate_work: OK
[    5.118272] Testing event workqueue_execute_start: OK
[    5.123257] Testing event workqueue_execute_end: OK
[    5.128281] Testing event signal_generate: OK
[    5.133256] Testing event signal_deliver: OK
[    5.138276] Testing event timer_init: OK
[    5.143260] Testing event timer_start: OK
[    5.148265] Testing event timer_expire_entry: OK
[    5.153257] Testing event timer_expire_exit: OK
[    5.158277] Testing event timer_cancel: OK
[    5.163264] Testing event hrtimer_init: OK
[    5.168277] Testing event hrtimer_start: OK
[    5.173309] Testing event hrtimer_expire_entry: OK
[    5.178271] Testing event hrtimer_expire_exit: OK
[    5.183301] Testing event hrtimer_cancel: OK
[    5.188284] Testing event itimer_state: OK
[    5.193257] Testing event itimer_expire: OK
[    5.198282] Testing event irq_handler_entry: OK
[    5.203257] Testing event irq_handler_exit: OK
[    5.208266] Testing event softirq_entry: OK
[    5.213257] Testing event softirq_exit: OK
[    5.218277] Testing event softirq_raise: OK
[    5.223259] Testing event console: OK
[    5.228311] Testing event task_newtask: OK
[    5.233255] Testing event task_rename: OK
[    5.238268] Testing event sys_enter: OK
[    5.243258] Testing event sys_exit: OK
[    5.248272] Testing event emulate_vsyscall: OK
[    5.253276] Running tests on trace event systems:
[    5.254168] Testing event system skb: OK
[    5.259478] Testing event system net: OK
[    5.264351] Testing event system napi: OK
[    5.269296] Testing event system sock: OK
[    5.274296] Testing event system udp: OK
[    5.279472] Testing event system regmap: OK
[    5.284388] Testing event system random: OK
[    5.289346] Testing event system regulator: OK
[    5.294352] Testing event system gpio: OK
[    5.299290] Testing event system block: OK
[    5.304487] Testing event system writeback: OK
[    5.309652] Testing event system compaction: 

[-- Attachment #3: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
  2012-07-12 23:46                     ` Tony Luck
  (?)
@ 2012-07-13 17:51                       ` Tony Luck
  -1 siblings, 0 replies; 96+ messages in thread
From: Tony Luck @ 2012-07-13 17:51 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Fengguang Wu, linux-kernel, torvalds, joshhunt00, axboe, rni,
	vgoyal, vwadekar, herbert, davem, linux-crypto, swhiteho, bpm,
	elder, xfs, marcel, gustavo, johan.hedberg, linux-bluetooth,
	martin.petersen

On Thu, Jul 12, 2012 at 4:46 PM, Tony Luck <tony.luck@gmail.com> wrote:
> Still hasn't come back in three reboots.  I have to leave now, can continue
> tomorrow.

Tired of rebooting ... seems that it is very hard to hit this with
this patch :-(

-Tony

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
  2012-07-13  2:08             ` Fengguang Wu
  (?)
@ 2012-07-14  3:41               ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-14  3:41 UTC (permalink / raw)
  To: Fengguang Wu
  Cc: linux-kernel, torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar,
	herbert, davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel,
	gustavo, johan.hedberg, linux-bluetooth, martin.petersen,
	Tony Luck

Hello,

On Fri, Jul 13, 2012 at 10:08:00AM +0800, Fengguang Wu wrote:
> [    0.165669] Performance Events: unsupported Netburst CPU model 6 no PMU driver, software events only.
> [    0.167001] XXX cpu=0 gcwq=ffff88000dc0cfc0 base=ffff88000dc11e80
> [    0.167989] XXX cpu=0 nr_running=0 @ ffff88000dc11e80
> [    0.168988] XXX cpu=0 nr_running=0 @ ffff88000dc11e88
> [    0.169988] XXX cpu=1 gcwq=ffff88000dd0cfc0 base=ffff88000dd11e80
> [    0.170988] XXX cpu=1 nr_running=0 @ ffff88000dd11e80
> [    0.171987] XXX cpu=1 nr_running=0 @ ffff88000dd11e88
> [    0.172988] XXX cpu=8 nr_running=0 @ ffffffff81d7c430
> [    0.173987] XXX cpu=8 nr_running=12 @ ffffffff81d7c438

Heh, I found it.  get_pool_nr_running() stores the nr_running array to
use in a local pointer-to-array and then returns a pointer to the
specific element from there depending on the priority.

	atomic_t (*nr_running)[NR_WORKER_POOLS];

	/* set @nr_running to the array to use */
	return nr_running[worker_pool_pri(pool)];

The [] operator in the return statement indexes over whole arrays
rather than over array elements, so if the index is 1, the above
statement offsets nr_running by sizeof(atomic_t [NR_WORKER_POOLS])
instead of sizeof(atomic_t).  It should have been
&(*nr_running)[worker_pool_pri(pool)] instead.

So, highpri ends up dereferencing out-of-bounds, and depending on
variable layout it may see a garbage value from the beginning (what you
were seeing) or be clobbered afterwards (what Tony was seeing).  This
also explains why I didn't see it and why Tony can no longer reproduce
it after the debug patch.

Will post updated patches.

Thank you.

-- 
tejun

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()
  2012-07-09 18:41   ` Tejun Heo
  (?)
@ 2012-07-14  3:55     ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-14  3:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen

From 8a0597bf9939d50039d4a6f446db51cf920daaad Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@kernel.org>
Date: Fri, 13 Jul 2012 20:50:50 -0700

Introduce NR_WORKER_POOLS and for_each_worker_pool() and convert code
paths which need to manipulate all pools in a gcwq to use them.
NR_WORKER_POOLS is currently one and for_each_worker_pool() iterates
over only @gcwq->pool.

Note that nr_running is a per-pool property; it is converted to an
array with NR_WORKER_POOLS elements and renamed to pool_nr_running.

The changes in this patch are mechanical and don't cause any
functional difference.  This is to prepare for multiple pools per
gcwq.

v2: nr_running indexing bug in get_pool_nr_running() fixed.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
---
git branch updated accordingly.  Thanks!

 kernel/workqueue.c |  225 ++++++++++++++++++++++++++++++++++++----------------
 1 files changed, 155 insertions(+), 70 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7a98bae..82eee34 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -74,6 +74,8 @@ enum {
 	TRUSTEE_RELEASE		= 3,		/* release workers */
 	TRUSTEE_DONE		= 4,		/* trustee is done */
 
+	NR_WORKER_POOLS		= 1,		/* # worker pools per gcwq */
+
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
 	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
 	BUSY_WORKER_HASH_MASK	= BUSY_WORKER_HASH_SIZE - 1,
@@ -274,6 +276,9 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 #define CREATE_TRACE_POINTS
 #include <trace/events/workqueue.h>
 
+#define for_each_worker_pool(pool, gcwq)				\
+	for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
 	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
 		hlist_for_each_entry(worker, pos, &gcwq->busy_hash[i], hentry)
@@ -454,7 +459,7 @@ static bool workqueue_freezing;		/* W: have wqs started freezing? */
  * try_to_wake_up().  Put it in a separate cacheline.
  */
 static DEFINE_PER_CPU(struct global_cwq, global_cwq);
-static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
+static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, pool_nr_running[NR_WORKER_POOLS]);
 
 /*
  * Global cpu workqueue and nr_running counter for unbound gcwq.  The
@@ -462,7 +467,9 @@ static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
  * workers have WORKER_UNBOUND set.
  */
 static struct global_cwq unbound_global_cwq;
-static atomic_t unbound_gcwq_nr_running = ATOMIC_INIT(0);	/* always 0 */
+static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {
+	[0 ... NR_WORKER_POOLS - 1]	= ATOMIC_INIT(0),	/* always 0 */
+};
 
 static int worker_thread(void *__worker);
 
@@ -477,11 +484,14 @@ static struct global_cwq *get_gcwq(unsigned int cpu)
 static atomic_t *get_pool_nr_running(struct worker_pool *pool)
 {
 	int cpu = pool->gcwq->cpu;
+	atomic_t (*nr_running)[NR_WORKER_POOLS];
 
 	if (cpu != WORK_CPU_UNBOUND)
-		return &per_cpu(gcwq_nr_running, cpu);
+		nr_running = &per_cpu(pool_nr_running, cpu);
 	else
-		return &unbound_gcwq_nr_running;
+		nr_running = &unbound_pool_nr_running;
+
+	return &(*nr_running)[0];
 }
 
 static struct cpu_workqueue_struct *get_cwq(unsigned int cpu,
@@ -3345,9 +3355,30 @@ EXPORT_SYMBOL_GPL(work_busy);
 	__ret1 < 0 ? -1 : 0;						\
 })
 
+static bool gcwq_is_managing_workers(struct global_cwq *gcwq)
+{
+	struct worker_pool *pool;
+
+	for_each_worker_pool(pool, gcwq)
+		if (pool->flags & POOL_MANAGING_WORKERS)
+			return true;
+	return false;
+}
+
+static bool gcwq_has_idle_workers(struct global_cwq *gcwq)
+{
+	struct worker_pool *pool;
+
+	for_each_worker_pool(pool, gcwq)
+		if (!list_empty(&pool->idle_list))
+			return true;
+	return false;
+}
+
 static int __cpuinit trustee_thread(void *__gcwq)
 {
 	struct global_cwq *gcwq = __gcwq;
+	struct worker_pool *pool;
 	struct worker *worker;
 	struct work_struct *work;
 	struct hlist_node *pos;
@@ -3363,13 +3394,15 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * cancelled.
 	 */
 	BUG_ON(gcwq->cpu != smp_processor_id());
-	rc = trustee_wait_event(!(gcwq->pool.flags & POOL_MANAGING_WORKERS));
+	rc = trustee_wait_event(!gcwq_is_managing_workers(gcwq));
 	BUG_ON(rc < 0);
 
-	gcwq->pool.flags |= POOL_MANAGING_WORKERS;
+	for_each_worker_pool(pool, gcwq) {
+		pool->flags |= POOL_MANAGING_WORKERS;
 
-	list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
-		worker->flags |= WORKER_ROGUE;
+		list_for_each_entry(worker, &pool->idle_list, entry)
+			worker->flags |= WORKER_ROGUE;
+	}
 
 	for_each_busy_worker(worker, i, pos, gcwq)
 		worker->flags |= WORKER_ROGUE;
@@ -3390,10 +3423,12 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * keep_working() are always true as long as the worklist is
 	 * not empty.
 	 */
-	atomic_set(get_pool_nr_running(&gcwq->pool), 0);
+	for_each_worker_pool(pool, gcwq)
+		atomic_set(get_pool_nr_running(pool), 0);
 
 	spin_unlock_irq(&gcwq->lock);
-	del_timer_sync(&gcwq->pool.idle_timer);
+	for_each_worker_pool(pool, gcwq)
+		del_timer_sync(&pool->idle_timer);
 	spin_lock_irq(&gcwq->lock);
 
 	/*
@@ -3415,29 +3450,38 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * may be frozen works in freezable cwqs.  Don't declare
 	 * completion while frozen.
 	 */
-	while (gcwq->pool.nr_workers != gcwq->pool.nr_idle ||
-	       gcwq->flags & GCWQ_FREEZING ||
-	       gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
-		int nr_works = 0;
+	while (true) {
+		bool busy = false;
 
-		list_for_each_entry(work, &gcwq->pool.worklist, entry) {
-			send_mayday(work);
-			nr_works++;
-		}
+		for_each_worker_pool(pool, gcwq)
+			busy |= pool->nr_workers != pool->nr_idle;
 
-		list_for_each_entry(worker, &gcwq->pool.idle_list, entry) {
-			if (!nr_works--)
-				break;
-			wake_up_process(worker->task);
-		}
+		if (!busy && !(gcwq->flags & GCWQ_FREEZING) &&
+		    gcwq->trustee_state != TRUSTEE_IN_CHARGE)
+			break;
 
-		if (need_to_create_worker(&gcwq->pool)) {
-			spin_unlock_irq(&gcwq->lock);
-			worker = create_worker(&gcwq->pool, false);
-			spin_lock_irq(&gcwq->lock);
-			if (worker) {
-				worker->flags |= WORKER_ROGUE;
-				start_worker(worker);
+		for_each_worker_pool(pool, gcwq) {
+			int nr_works = 0;
+
+			list_for_each_entry(work, &pool->worklist, entry) {
+				send_mayday(work);
+				nr_works++;
+			}
+
+			list_for_each_entry(worker, &pool->idle_list, entry) {
+				if (!nr_works--)
+					break;
+				wake_up_process(worker->task);
+			}
+
+			if (need_to_create_worker(pool)) {
+				spin_unlock_irq(&gcwq->lock);
+				worker = create_worker(pool, false);
+				spin_lock_irq(&gcwq->lock);
+				if (worker) {
+					worker->flags |= WORKER_ROGUE;
+					start_worker(worker);
+				}
 			}
 		}
 
@@ -3452,11 +3496,18 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * all workers till we're canceled.
 	 */
 	do {
-		rc = trustee_wait_event(!list_empty(&gcwq->pool.idle_list));
-		while (!list_empty(&gcwq->pool.idle_list))
-			destroy_worker(list_first_entry(&gcwq->pool.idle_list,
-							struct worker, entry));
-	} while (gcwq->pool.nr_workers && rc >= 0);
+		rc = trustee_wait_event(gcwq_has_idle_workers(gcwq));
+
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			while (!list_empty(&pool->idle_list)) {
+				worker = list_first_entry(&pool->idle_list,
+							  struct worker, entry);
+				destroy_worker(worker);
+			}
+			i |= pool->nr_workers;
+		}
+	} while (i && rc >= 0);
 
 	/*
 	 * At this point, either draining has completed and no worker
@@ -3465,7 +3516,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * Tell the remaining busy ones to rebind once it finishes the
 	 * currently scheduled works by scheduling the rebind_work.
 	 */
-	WARN_ON(!list_empty(&gcwq->pool.idle_list));
+	for_each_worker_pool(pool, gcwq)
+		WARN_ON(!list_empty(&pool->idle_list));
 
 	for_each_busy_worker(worker, i, pos, gcwq) {
 		struct work_struct *rebind_work = &worker->rebind_work;
@@ -3490,7 +3542,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	}
 
 	/* relinquish manager role */
-	gcwq->pool.flags &= ~POOL_MANAGING_WORKERS;
+	for_each_worker_pool(pool, gcwq)
+		pool->flags &= ~POOL_MANAGING_WORKERS;
 
 	/* notify completion */
 	gcwq->trustee = NULL;
@@ -3532,8 +3585,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 	unsigned int cpu = (unsigned long)hcpu;
 	struct global_cwq *gcwq = get_gcwq(cpu);
 	struct task_struct *new_trustee = NULL;
-	struct worker *uninitialized_var(new_worker);
+	struct worker *new_workers[NR_WORKER_POOLS] = { };
+	struct worker_pool *pool;
 	unsigned long flags;
+	int i;
 
 	action &= ~CPU_TASKS_FROZEN;
 
@@ -3546,12 +3601,12 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		kthread_bind(new_trustee, cpu);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->pool.first_idle);
-		new_worker = create_worker(&gcwq->pool, false);
-		if (!new_worker) {
-			if (new_trustee)
-				kthread_stop(new_trustee);
-			return NOTIFY_BAD;
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			BUG_ON(pool->first_idle);
+			new_workers[i] = create_worker(pool, false);
+			if (!new_workers[i++])
+				goto err_destroy;
 		}
 	}
 
@@ -3568,8 +3623,11 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = new_worker;
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			BUG_ON(pool->first_idle);
+			pool->first_idle = new_workers[i++];
+		}
 		break;
 
 	case CPU_DYING:
@@ -3586,8 +3644,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		gcwq->trustee_state = TRUSTEE_BUTCHER;
 		/* fall through */
 	case CPU_UP_CANCELED:
-		destroy_worker(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = NULL;
+		for_each_worker_pool(pool, gcwq) {
+			destroy_worker(pool->first_idle);
+			pool->first_idle = NULL;
+		}
 		break;
 
 	case CPU_DOWN_FAILED:
@@ -3604,18 +3664,32 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		 * Put the first_idle in and request a real manager to
 		 * take a look.
 		 */
-		spin_unlock_irq(&gcwq->lock);
-		kthread_bind(gcwq->pool.first_idle->task, cpu);
-		spin_lock_irq(&gcwq->lock);
-		gcwq->pool.flags |= POOL_MANAGE_WORKERS;
-		start_worker(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = NULL;
+		for_each_worker_pool(pool, gcwq) {
+			spin_unlock_irq(&gcwq->lock);
+			kthread_bind(pool->first_idle->task, cpu);
+			spin_lock_irq(&gcwq->lock);
+			pool->flags |= POOL_MANAGE_WORKERS;
+			start_worker(pool->first_idle);
+			pool->first_idle = NULL;
+		}
 		break;
 	}
 
 	spin_unlock_irqrestore(&gcwq->lock, flags);
 
 	return notifier_from_errno(0);
+
+err_destroy:
+	if (new_trustee)
+		kthread_stop(new_trustee);
+
+	spin_lock_irqsave(&gcwq->lock, flags);
+	for (i = 0; i < NR_WORKER_POOLS; i++)
+		if (new_workers[i])
+			destroy_worker(new_workers[i]);
+	spin_unlock_irqrestore(&gcwq->lock, flags);
+
+	return NOTIFY_BAD;
 }
 
 #ifdef CONFIG_SMP
@@ -3774,6 +3848,7 @@ void thaw_workqueues(void)
 
 	for_each_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		struct worker_pool *pool;
 		struct workqueue_struct *wq;
 
 		spin_lock_irq(&gcwq->lock);
@@ -3795,7 +3870,8 @@ void thaw_workqueues(void)
 				cwq_activate_first_delayed(cwq);
 		}
 
-		wake_up_worker(&gcwq->pool);
+		for_each_worker_pool(pool, gcwq)
+			wake_up_worker(pool);
 
 		spin_unlock_irq(&gcwq->lock);
 	}
@@ -3816,25 +3892,29 @@ static int __init init_workqueues(void)
 	/* initialize gcwqs */
 	for_each_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		struct worker_pool *pool;
 
 		spin_lock_init(&gcwq->lock);
-		gcwq->pool.gcwq = gcwq;
-		INIT_LIST_HEAD(&gcwq->pool.worklist);
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		INIT_LIST_HEAD(&gcwq->pool.idle_list);
 		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
 			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
 
-		init_timer_deferrable(&gcwq->pool.idle_timer);
-		gcwq->pool.idle_timer.function = idle_worker_timeout;
-		gcwq->pool.idle_timer.data = (unsigned long)&gcwq->pool;
+		for_each_worker_pool(pool, gcwq) {
+			pool->gcwq = gcwq;
+			INIT_LIST_HEAD(&pool->worklist);
+			INIT_LIST_HEAD(&pool->idle_list);
+
+			init_timer_deferrable(&pool->idle_timer);
+			pool->idle_timer.function = idle_worker_timeout;
+			pool->idle_timer.data = (unsigned long)pool;
 
-		setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
-			    (unsigned long)&gcwq->pool);
+			setup_timer(&pool->mayday_timer, gcwq_mayday_timeout,
+				    (unsigned long)pool);
 
-		ida_init(&gcwq->pool.worker_ida);
+			ida_init(&pool->worker_ida);
+		}
 
 		gcwq->trustee_state = TRUSTEE_DONE;
 		init_waitqueue_head(&gcwq->trustee_wait);
@@ -3843,15 +3923,20 @@ static int __init init_workqueues(void)
 	/* create the initial worker */
 	for_each_online_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
-		struct worker *worker;
+		struct worker_pool *pool;
 
 		if (cpu != WORK_CPU_UNBOUND)
 			gcwq->flags &= ~GCWQ_DISASSOCIATED;
-		worker = create_worker(&gcwq->pool, true);
-		BUG_ON(!worker);
-		spin_lock_irq(&gcwq->lock);
-		start_worker(worker);
-		spin_unlock_irq(&gcwq->lock);
+
+		for_each_worker_pool(pool, gcwq) {
+			struct worker *worker;
+
+			worker = create_worker(pool, true);
+			BUG_ON(!worker);
+			spin_lock_irq(&gcwq->lock);
+			start_worker(worker);
+			spin_unlock_irq(&gcwq->lock);
+		}
 	}
 
 	system_wq = alloc_workqueue("events", 0, 0);
-- 
1.7.7.3

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* Re: [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()
@ 2012-07-14  3:55     ` Tejun Heo
  0 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-14  3:55 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen

From 8a0597bf9939d50039d4a6f446db51cf920daaad Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@kernel.org>
Date: Fri, 13 Jul 2012 20:50:50 -0700

Introduce NR_WORKER_POOLS and for_each_worker_pool() and convert code
paths which need to manipulate all pools in a gcwq to use them.
NR_WORKER_POOLS is currently one and for_each_worker_pool() iterates
over only @gcwq->pool.

Note that nr_running is a per-pool property; it is converted to an
array with NR_WORKER_POOLS elements and renamed to pool_nr_running.

The changes in this patch are mechanical and don't cause any
functional difference.  This is to prepare for multiple pools per
gcwq.

v2: nr_running indexing bug in get_pool_nr_running() fixed.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
---
git branch updated accordingly.  Thanks!

 kernel/workqueue.c |  225 ++++++++++++++++++++++++++++++++++++----------------
 1 files changed, 155 insertions(+), 70 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7a98bae..82eee34 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -74,6 +74,8 @@ enum {
 	TRUSTEE_RELEASE		= 3,		/* release workers */
 	TRUSTEE_DONE		= 4,		/* trustee is done */
 
+	NR_WORKER_POOLS		= 1,		/* # worker pools per gcwq */
+
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
 	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
 	BUSY_WORKER_HASH_MASK	= BUSY_WORKER_HASH_SIZE - 1,
@@ -274,6 +276,9 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 #define CREATE_TRACE_POINTS
 #include <trace/events/workqueue.h>
 
+#define for_each_worker_pool(pool, gcwq)				\
+	for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
 	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
 		hlist_for_each_entry(worker, pos, &gcwq->busy_hash[i], hentry)
@@ -454,7 +459,7 @@ static bool workqueue_freezing;		/* W: have wqs started freezing? */
  * try_to_wake_up().  Put it in a separate cacheline.
  */
 static DEFINE_PER_CPU(struct global_cwq, global_cwq);
-static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
+static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, pool_nr_running[NR_WORKER_POOLS]);
 
 /*
  * Global cpu workqueue and nr_running counter for unbound gcwq.  The
@@ -462,7 +467,9 @@ static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
  * workers have WORKER_UNBOUND set.
  */
 static struct global_cwq unbound_global_cwq;
-static atomic_t unbound_gcwq_nr_running = ATOMIC_INIT(0);	/* always 0 */
+static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {
+	[0 ... NR_WORKER_POOLS - 1]	= ATOMIC_INIT(0),	/* always 0 */
+};
 
 static int worker_thread(void *__worker);
 
@@ -477,11 +484,14 @@ static struct global_cwq *get_gcwq(unsigned int cpu)
 static atomic_t *get_pool_nr_running(struct worker_pool *pool)
 {
 	int cpu = pool->gcwq->cpu;
+	atomic_t (*nr_running)[NR_WORKER_POOLS];
 
 	if (cpu != WORK_CPU_UNBOUND)
-		return &per_cpu(gcwq_nr_running, cpu);
+		nr_running = &per_cpu(pool_nr_running, cpu);
 	else
-		return &unbound_gcwq_nr_running;
+		nr_running = &unbound_pool_nr_running;
+
+	return &(*nr_running)[0];
 }
 
 static struct cpu_workqueue_struct *get_cwq(unsigned int cpu,
@@ -3345,9 +3355,30 @@ EXPORT_SYMBOL_GPL(work_busy);
 	__ret1 < 0 ? -1 : 0;						\
 })
 
+static bool gcwq_is_managing_workers(struct global_cwq *gcwq)
+{
+	struct worker_pool *pool;
+
+	for_each_worker_pool(pool, gcwq)
+		if (pool->flags & POOL_MANAGING_WORKERS)
+			return true;
+	return false;
+}
+
+static bool gcwq_has_idle_workers(struct global_cwq *gcwq)
+{
+	struct worker_pool *pool;
+
+	for_each_worker_pool(pool, gcwq)
+		if (!list_empty(&pool->idle_list))
+			return true;
+	return false;
+}
+
 static int __cpuinit trustee_thread(void *__gcwq)
 {
 	struct global_cwq *gcwq = __gcwq;
+	struct worker_pool *pool;
 	struct worker *worker;
 	struct work_struct *work;
 	struct hlist_node *pos;
@@ -3363,13 +3394,15 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * cancelled.
 	 */
 	BUG_ON(gcwq->cpu != smp_processor_id());
-	rc = trustee_wait_event(!(gcwq->pool.flags & POOL_MANAGING_WORKERS));
+	rc = trustee_wait_event(!gcwq_is_managing_workers(gcwq));
 	BUG_ON(rc < 0);
 
-	gcwq->pool.flags |= POOL_MANAGING_WORKERS;
+	for_each_worker_pool(pool, gcwq) {
+		pool->flags |= POOL_MANAGING_WORKERS;
 
-	list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
-		worker->flags |= WORKER_ROGUE;
+		list_for_each_entry(worker, &pool->idle_list, entry)
+			worker->flags |= WORKER_ROGUE;
+	}
 
 	for_each_busy_worker(worker, i, pos, gcwq)
 		worker->flags |= WORKER_ROGUE;
@@ -3390,10 +3423,12 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * keep_working() are always true as long as the worklist is
 	 * not empty.
 	 */
-	atomic_set(get_pool_nr_running(&gcwq->pool), 0);
+	for_each_worker_pool(pool, gcwq)
+		atomic_set(get_pool_nr_running(pool), 0);
 
 	spin_unlock_irq(&gcwq->lock);
-	del_timer_sync(&gcwq->pool.idle_timer);
+	for_each_worker_pool(pool, gcwq)
+		del_timer_sync(&pool->idle_timer);
 	spin_lock_irq(&gcwq->lock);
 
 	/*
@@ -3415,29 +3450,38 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * may be frozen works in freezable cwqs.  Don't declare
 	 * completion while frozen.
 	 */
-	while (gcwq->pool.nr_workers != gcwq->pool.nr_idle ||
-	       gcwq->flags & GCWQ_FREEZING ||
-	       gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
-		int nr_works = 0;
+	while (true) {
+		bool busy = false;
 
-		list_for_each_entry(work, &gcwq->pool.worklist, entry) {
-			send_mayday(work);
-			nr_works++;
-		}
+		for_each_worker_pool(pool, gcwq)
+			busy |= pool->nr_workers != pool->nr_idle;
 
-		list_for_each_entry(worker, &gcwq->pool.idle_list, entry) {
-			if (!nr_works--)
-				break;
-			wake_up_process(worker->task);
-		}
+		if (!busy && !(gcwq->flags & GCWQ_FREEZING) &&
+		    gcwq->trustee_state != TRUSTEE_IN_CHARGE)
+			break;
 
-		if (need_to_create_worker(&gcwq->pool)) {
-			spin_unlock_irq(&gcwq->lock);
-			worker = create_worker(&gcwq->pool, false);
-			spin_lock_irq(&gcwq->lock);
-			if (worker) {
-				worker->flags |= WORKER_ROGUE;
-				start_worker(worker);
+		for_each_worker_pool(pool, gcwq) {
+			int nr_works = 0;
+
+			list_for_each_entry(work, &pool->worklist, entry) {
+				send_mayday(work);
+				nr_works++;
+			}
+
+			list_for_each_entry(worker, &pool->idle_list, entry) {
+				if (!nr_works--)
+					break;
+				wake_up_process(worker->task);
+			}
+
+			if (need_to_create_worker(pool)) {
+				spin_unlock_irq(&gcwq->lock);
+				worker = create_worker(pool, false);
+				spin_lock_irq(&gcwq->lock);
+				if (worker) {
+					worker->flags |= WORKER_ROGUE;
+					start_worker(worker);
+				}
 			}
 		}
 
@@ -3452,11 +3496,18 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * all workers till we're canceled.
 	 */
 	do {
-		rc = trustee_wait_event(!list_empty(&gcwq->pool.idle_list));
-		while (!list_empty(&gcwq->pool.idle_list))
-			destroy_worker(list_first_entry(&gcwq->pool.idle_list,
-							struct worker, entry));
-	} while (gcwq->pool.nr_workers && rc >= 0);
+		rc = trustee_wait_event(gcwq_has_idle_workers(gcwq));
+
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			while (!list_empty(&pool->idle_list)) {
+				worker = list_first_entry(&pool->idle_list,
+							  struct worker, entry);
+				destroy_worker(worker);
+			}
+			i |= pool->nr_workers;
+		}
+	} while (i && rc >= 0);
 
 	/*
 	 * At this point, either draining has completed and no worker
@@ -3465,7 +3516,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * Tell the remaining busy ones to rebind once it finishes the
 	 * currently scheduled works by scheduling the rebind_work.
 	 */
-	WARN_ON(!list_empty(&gcwq->pool.idle_list));
+	for_each_worker_pool(pool, gcwq)
+		WARN_ON(!list_empty(&pool->idle_list));
 
 	for_each_busy_worker(worker, i, pos, gcwq) {
 		struct work_struct *rebind_work = &worker->rebind_work;
@@ -3490,7 +3542,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	}
 
 	/* relinquish manager role */
-	gcwq->pool.flags &= ~POOL_MANAGING_WORKERS;
+	for_each_worker_pool(pool, gcwq)
+		pool->flags &= ~POOL_MANAGING_WORKERS;
 
 	/* notify completion */
 	gcwq->trustee = NULL;
@@ -3532,8 +3585,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 	unsigned int cpu = (unsigned long)hcpu;
 	struct global_cwq *gcwq = get_gcwq(cpu);
 	struct task_struct *new_trustee = NULL;
-	struct worker *uninitialized_var(new_worker);
+	struct worker *new_workers[NR_WORKER_POOLS] = { };
+	struct worker_pool *pool;
 	unsigned long flags;
+	int i;
 
 	action &= ~CPU_TASKS_FROZEN;
 
@@ -3546,12 +3601,12 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		kthread_bind(new_trustee, cpu);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->pool.first_idle);
-		new_worker = create_worker(&gcwq->pool, false);
-		if (!new_worker) {
-			if (new_trustee)
-				kthread_stop(new_trustee);
-			return NOTIFY_BAD;
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			BUG_ON(pool->first_idle);
+			new_workers[i] = create_worker(pool, false);
+			if (!new_workers[i++])
+				goto err_destroy;
 		}
 	}
 
@@ -3568,8 +3623,11 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = new_worker;
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			BUG_ON(pool->first_idle);
+			pool->first_idle = new_workers[i++];
+		}
 		break;
 
 	case CPU_DYING:
@@ -3586,8 +3644,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		gcwq->trustee_state = TRUSTEE_BUTCHER;
 		/* fall through */
 	case CPU_UP_CANCELED:
-		destroy_worker(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = NULL;
+		for_each_worker_pool(pool, gcwq) {
+			destroy_worker(pool->first_idle);
+			pool->first_idle = NULL;
+		}
 		break;
 
 	case CPU_DOWN_FAILED:
@@ -3604,18 +3664,32 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		 * Put the first_idle in and request a real manager to
 		 * take a look.
 		 */
-		spin_unlock_irq(&gcwq->lock);
-		kthread_bind(gcwq->pool.first_idle->task, cpu);
-		spin_lock_irq(&gcwq->lock);
-		gcwq->pool.flags |= POOL_MANAGE_WORKERS;
-		start_worker(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = NULL;
+		for_each_worker_pool(pool, gcwq) {
+			spin_unlock_irq(&gcwq->lock);
+			kthread_bind(pool->first_idle->task, cpu);
+			spin_lock_irq(&gcwq->lock);
+			pool->flags |= POOL_MANAGE_WORKERS;
+			start_worker(pool->first_idle);
+			pool->first_idle = NULL;
+		}
 		break;
 	}
 
 	spin_unlock_irqrestore(&gcwq->lock, flags);
 
 	return notifier_from_errno(0);
+
+err_destroy:
+	if (new_trustee)
+		kthread_stop(new_trustee);
+
+	spin_lock_irqsave(&gcwq->lock, flags);
+	for (i = 0; i < NR_WORKER_POOLS; i++)
+		if (new_workers[i])
+			destroy_worker(new_workers[i]);
+	spin_unlock_irqrestore(&gcwq->lock, flags);
+
+	return NOTIFY_BAD;
 }
 
 #ifdef CONFIG_SMP
@@ -3774,6 +3848,7 @@ void thaw_workqueues(void)
 
 	for_each_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		struct worker_pool *pool;
 		struct workqueue_struct *wq;
 
 		spin_lock_irq(&gcwq->lock);
@@ -3795,7 +3870,8 @@ void thaw_workqueues(void)
 				cwq_activate_first_delayed(cwq);
 		}
 
-		wake_up_worker(&gcwq->pool);
+		for_each_worker_pool(pool, gcwq)
+			wake_up_worker(pool);
 
 		spin_unlock_irq(&gcwq->lock);
 	}
@@ -3816,25 +3892,29 @@ static int __init init_workqueues(void)
 	/* initialize gcwqs */
 	for_each_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		struct worker_pool *pool;
 
 		spin_lock_init(&gcwq->lock);
-		gcwq->pool.gcwq = gcwq;
-		INIT_LIST_HEAD(&gcwq->pool.worklist);
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		INIT_LIST_HEAD(&gcwq->pool.idle_list);
 		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
 			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
 
-		init_timer_deferrable(&gcwq->pool.idle_timer);
-		gcwq->pool.idle_timer.function = idle_worker_timeout;
-		gcwq->pool.idle_timer.data = (unsigned long)&gcwq->pool;
+		for_each_worker_pool(pool, gcwq) {
+			pool->gcwq = gcwq;
+			INIT_LIST_HEAD(&pool->worklist);
+			INIT_LIST_HEAD(&pool->idle_list);
+
+			init_timer_deferrable(&pool->idle_timer);
+			pool->idle_timer.function = idle_worker_timeout;
+			pool->idle_timer.data = (unsigned long)pool;
 
-		setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
-			    (unsigned long)&gcwq->pool);
+			setup_timer(&pool->mayday_timer, gcwq_mayday_timeout,
+				    (unsigned long)pool);
 
-		ida_init(&gcwq->pool.worker_ida);
+			ida_init(&pool->worker_ida);
+		}
 
 		gcwq->trustee_state = TRUSTEE_DONE;
 		init_waitqueue_head(&gcwq->trustee_wait);
@@ -3843,15 +3923,20 @@ static int __init init_workqueues(void)
 	/* create the initial worker */
 	for_each_online_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
-		struct worker *worker;
+		struct worker_pool *pool;
 
 		if (cpu != WORK_CPU_UNBOUND)
 			gcwq->flags &= ~GCWQ_DISASSOCIATED;
-		worker = create_worker(&gcwq->pool, true);
-		BUG_ON(!worker);
-		spin_lock_irq(&gcwq->lock);
-		start_worker(worker);
-		spin_unlock_irq(&gcwq->lock);
+
+		for_each_worker_pool(pool, gcwq) {
+			struct worker *worker;
+
+			worker = create_worker(pool, true);
+			BUG_ON(!worker);
+			spin_lock_irq(&gcwq->lock);
+			start_worker(worker);
+			spin_unlock_irq(&gcwq->lock);
+		}
 	}
 
 	system_wq = alloc_workqueue("events", 0, 0);
-- 
1.7.7.3


^ permalink raw reply related	[flat|nested] 96+ messages in thread


* [PATCH UPDATED 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
  2012-07-09 18:41   ` Tejun Heo
  (?)
@ 2012-07-14  3:56     ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-14  3:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen, Tony Luck,
	Fengguang Wu

From 12f804d130d966f2a094e8037e9f163215d13f23 Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@kernel.org>
Date: Fri, 13 Jul 2012 20:50:50 -0700

WQ_HIGHPRI was implemented by queueing highpri work items at the head
of the global worklist.  Other than queueing at the head, they weren't
handled differently; unfortunately, this could lead to execution
latency of a few seconds on heavily loaded systems.

Now that workqueue code has been updated to deal with multiple
worker_pools per global_cwq, this patch reimplements WQ_HIGHPRI using
a separate worker_pool.  NR_WORKER_POOLS is bumped to two and
gcwq->pools[0] is used for normal pri work items and ->pools[1] for
highpri.  Highpri workers get a -20 nice level and have an 'H' suffix in
their names.  Note that this change increases the number of kworkers
per cpu.

POOL_HIGHPRI_PENDING, pool_determine_ins_pos() and highpri chain
wakeup code in process_one_work() are no longer used and removed.

This allows proper prioritization of highpri work items and removes
high execution latency of highpri work items.

v2: nr_running indexing bug in get_pool_nr_running() fixed.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Josh Hunt <joshhunt00@gmail.com>
LKML-Reference: <CAKA=qzaHqwZ8eqpLNFjxnO2fX-tgAOjmpvxgBFjv6dJeQaOW1w@mail.gmail.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
---
git branch updated accordingly.  Thanks.

 Documentation/workqueue.txt |  103 ++++++++++++++++---------------------------
 kernel/workqueue.c          |  100 +++++++++++------------------------------
 2 files changed, 65 insertions(+), 138 deletions(-)

diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt
index a0b577d..a6ab4b6 100644
--- a/Documentation/workqueue.txt
+++ b/Documentation/workqueue.txt
@@ -89,25 +89,28 @@ called thread-pools.
 
 The cmwq design differentiates between the user-facing workqueues that
 subsystems and drivers queue work items on and the backend mechanism
-which manages thread-pool and processes the queued work items.
+which manages thread-pools and processes the queued work items.
 
 The backend is called gcwq.  There is one gcwq for each possible CPU
-and one gcwq to serve work items queued on unbound workqueues.
+and one gcwq to serve work items queued on unbound workqueues.  Each
+gcwq has two thread-pools - one for normal work items and the other
+for high priority ones.
 
 Subsystems and drivers can create and queue work items through special
 workqueue API functions as they see fit. They can influence some
 aspects of the way the work items are executed by setting flags on the
 workqueue they are putting the work item on. These flags include
-things like CPU locality, reentrancy, concurrency limits and more. To
-get a detailed overview refer to the API description of
+things like CPU locality, reentrancy, concurrency limits, priority and
+more.  To get a detailed overview refer to the API description of
 alloc_workqueue() below.
 
-When a work item is queued to a workqueue, the target gcwq is
-determined according to the queue parameters and workqueue attributes
-and appended on the shared worklist of the gcwq.  For example, unless
-specifically overridden, a work item of a bound workqueue will be
-queued on the worklist of exactly that gcwq that is associated to the
-CPU the issuer is running on.
+When a work item is queued to a workqueue, the target gcwq and
+thread-pool is determined according to the queue parameters and
+workqueue attributes and appended on the shared worklist of the
+thread-pool.  For example, unless specifically overridden, a work item
+of a bound workqueue will be queued on the worklist of either normal
+or highpri thread-pool of the gcwq that is associated to the CPU the
+issuer is running on.
 
 For any worker pool implementation, managing the concurrency level
 (how many execution contexts are active) is an important issue.  cmwq
@@ -115,26 +118,26 @@ tries to keep the concurrency at a minimal but sufficient level.
 Minimal to save resources and sufficient in that the system is used at
 its full capacity.
 
-Each gcwq bound to an actual CPU implements concurrency management by
-hooking into the scheduler.  The gcwq is notified whenever an active
-worker wakes up or sleeps and keeps track of the number of the
-currently runnable workers.  Generally, work items are not expected to
-hog a CPU and consume many cycles.  That means maintaining just enough
-concurrency to prevent work processing from stalling should be
-optimal.  As long as there are one or more runnable workers on the
-CPU, the gcwq doesn't start execution of a new work, but, when the
-last running worker goes to sleep, it immediately schedules a new
-worker so that the CPU doesn't sit idle while there are pending work
-items.  This allows using a minimal number of workers without losing
-execution bandwidth.
+Each thread-pool bound to an actual CPU implements concurrency
+management by hooking into the scheduler.  The thread-pool is notified
+whenever an active worker wakes up or sleeps and keeps track of the
+number of the currently runnable workers.  Generally, work items are
+not expected to hog a CPU and consume many cycles.  That means
+maintaining just enough concurrency to prevent work processing from
+stalling should be optimal.  As long as there are one or more runnable
+workers on the CPU, the thread-pool doesn't start execution of a new
+work, but, when the last running worker goes to sleep, it immediately
+schedules a new worker so that the CPU doesn't sit idle while there
+are pending work items.  This allows using a minimal number of workers
+without losing execution bandwidth.
 
 Keeping idle workers around doesn't cost other than the memory space
 for kthreads, so cmwq holds onto idle ones for a while before killing
 them.
 
 For an unbound wq, the above concurrency management doesn't apply and
-the gcwq for the pseudo unbound CPU tries to start executing all work
-items as soon as possible.  The responsibility of regulating
+the thread-pools for the pseudo unbound CPU try to start executing all
+work items as soon as possible.  The responsibility of regulating
 concurrency level is on the users.  There is also a flag to mark a
 bound wq to ignore the concurrency management.  Please refer to the
 API section for details.
@@ -205,31 +208,22 @@ resources, scheduled and executed.
 
   WQ_HIGHPRI
 
-	Work items of a highpri wq are queued at the head of the
-	worklist of the target gcwq and start execution regardless of
-	the current concurrency level.  In other words, highpri work
-	items will always start execution as soon as execution
-	resource is available.
+	Work items of a highpri wq are queued to the highpri
+	thread-pool of the target gcwq.  Highpri thread-pools are
+	served by worker threads with elevated nice level.
 
-	Ordering among highpri work items is preserved - a highpri
-	work item queued after another highpri work item will start
-	execution after the earlier highpri work item starts.
-
-	Although highpri work items are not held back by other
-	runnable work items, they still contribute to the concurrency
-	level.  Highpri work items in runnable state will prevent
-	non-highpri work items from starting execution.
-
-	This flag is meaningless for unbound wq.
+	Note that normal and highpri thread-pools don't interact with
+	each other.  Each maintain its separate pool of workers and
+	implements concurrency management among its workers.
 
   WQ_CPU_INTENSIVE
 
 	Work items of a CPU intensive wq do not contribute to the
 	concurrency level.  In other words, runnable CPU intensive
-	work items will not prevent other work items from starting
-	execution.  This is useful for bound work items which are
-	expected to hog CPU cycles so that their execution is
-	regulated by the system scheduler.
+	work items will not prevent other work items in the same
+	thread-pool from starting execution.  This is useful for bound
+	work items which are expected to hog CPU cycles so that their
+	execution is regulated by the system scheduler.
 
 	Although CPU intensive work items don't contribute to the
 	concurrency level, start of their executions is still
@@ -239,14 +233,6 @@ resources, scheduled and executed.
 
 	This flag is meaningless for unbound wq.
 
-  WQ_HIGHPRI | WQ_CPU_INTENSIVE
-
-	This combination makes the wq avoid interaction with
-	concurrency management completely and behave as a simple
-	per-CPU execution context provider.  Work items queued on a
-	highpri CPU-intensive wq start execution as soon as resources
-	are available and don't affect execution of other work items.
-
 @max_active:
 
 @max_active determines the maximum number of execution contexts per
@@ -328,20 +314,7 @@ If @max_active == 2,
  35		w2 wakes up and finishes
 
 Now, let's assume w1 and w2 are queued to a different wq q1 which has
-WQ_HIGHPRI set,
-
- TIME IN MSECS	EVENT
- 0		w1 and w2 start and burn CPU
- 5		w1 sleeps
- 10		w2 sleeps
- 10		w0 starts and burns CPU
- 15		w0 sleeps
- 15		w1 wakes up and finishes
- 20		w2 wakes up and finishes
- 25		w0 wakes up and burns CPU
- 30		w0 finishes
-
-If q1 has WQ_CPU_INTENSIVE set,
+WQ_CPU_INTENSIVE set,
 
  TIME IN MSECS	EVENT
  0		w0 starts and burns CPU
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 82eee34..30d014b 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -52,7 +52,6 @@ enum {
 	/* pool flags */
 	POOL_MANAGE_WORKERS	= 1 << 0,	/* need to manage workers */
 	POOL_MANAGING_WORKERS	= 1 << 1,	/* managing workers */
-	POOL_HIGHPRI_PENDING	= 1 << 2,	/* highpri works on queue */
 
 	/* worker flags */
 	WORKER_STARTED		= 1 << 0,	/* started */
@@ -74,7 +73,7 @@ enum {
 	TRUSTEE_RELEASE		= 3,		/* release workers */
 	TRUSTEE_DONE		= 4,		/* trustee is done */
 
-	NR_WORKER_POOLS		= 1,		/* # worker pools per gcwq */
+	NR_WORKER_POOLS		= 2,		/* # worker pools per gcwq */
 
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
 	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
@@ -95,6 +94,7 @@ enum {
 	 * all cpus.  Give -20.
 	 */
 	RESCUER_NICE_LEVEL	= -20,
+	HIGHPRI_NICE_LEVEL	= -20,
 };
 
 /*
@@ -174,7 +174,7 @@ struct global_cwq {
 	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
 						/* L: hash of busy workers */
 
-	struct worker_pool	pool;		/* the worker pools */
+	struct worker_pool	pools[2];	/* normal and highpri pools */
 
 	struct task_struct	*trustee;	/* L: for gcwq shutdown */
 	unsigned int		trustee_state;	/* L: trustee state */
@@ -277,7 +277,8 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 #include <trace/events/workqueue.h>
 
 #define for_each_worker_pool(pool, gcwq)				\
-	for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+	for ((pool) = &(gcwq)->pools[0];				\
+	     (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)
 
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
 	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
@@ -473,6 +474,11 @@ static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {
 
 static int worker_thread(void *__worker);
 
+static int worker_pool_pri(struct worker_pool *pool)
+{
+	return pool - pool->gcwq->pools;
+}
+
 static struct global_cwq *get_gcwq(unsigned int cpu)
 {
 	if (cpu != WORK_CPU_UNBOUND)
@@ -491,7 +497,7 @@ static atomic_t *get_pool_nr_running(struct worker_pool *pool)
 	else
 		nr_running = &unbound_pool_nr_running;
 
-	return &(*nr_running)[0];
+	return &(*nr_running)[worker_pool_pri(pool)];
 }
 
 static struct cpu_workqueue_struct *get_cwq(unsigned int cpu,
@@ -588,15 +594,14 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
 }
 
 /*
- * Policy functions.  These define the policies on how the global
- * worker pool is managed.  Unless noted otherwise, these functions
- * assume that they're being called with gcwq->lock held.
+ * Policy functions.  These define the policies on how the global worker
+ * pools are managed.  Unless noted otherwise, these functions assume that
+ * they're being called with gcwq->lock held.
  */
 
 static bool __need_more_worker(struct worker_pool *pool)
 {
-	return !atomic_read(get_pool_nr_running(pool)) ||
-		(pool->flags & POOL_HIGHPRI_PENDING);
+	return !atomic_read(get_pool_nr_running(pool));
 }
 
 /*
@@ -623,9 +628,7 @@ static bool keep_working(struct worker_pool *pool)
 {
 	atomic_t *nr_running = get_pool_nr_running(pool);
 
-	return !list_empty(&pool->worklist) &&
-		(atomic_read(nr_running) <= 1 ||
-		 (pool->flags & POOL_HIGHPRI_PENDING));
+	return !list_empty(&pool->worklist) && atomic_read(nr_running) <= 1;
 }
 
 /* Do we need a new worker?  Called from manager. */
@@ -894,43 +897,6 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
 }
 
 /**
- * pool_determine_ins_pos - find insertion position
- * @pool: pool of interest
- * @cwq: cwq a work is being queued for
- *
- * A work for @cwq is about to be queued on @pool, determine insertion
- * position for the work.  If @cwq is for HIGHPRI wq, the work is
- * queued at the head of the queue but in FIFO order with respect to
- * other HIGHPRI works; otherwise, at the end of the queue.  This
- * function also sets POOL_HIGHPRI_PENDING flag to hint @pool that
- * there are HIGHPRI works pending.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to inserstion position.
- */
-static inline struct list_head *pool_determine_ins_pos(struct worker_pool *pool,
-					       struct cpu_workqueue_struct *cwq)
-{
-	struct work_struct *twork;
-
-	if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
-		return &pool->worklist;
-
-	list_for_each_entry(twork, &pool->worklist, entry) {
-		struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
-
-		if (!(tcwq->wq->flags & WQ_HIGHPRI))
-			break;
-	}
-
-	pool->flags |= POOL_HIGHPRI_PENDING;
-	return &twork->entry;
-}
-
-/**
  * insert_work - insert a work into gcwq
  * @cwq: cwq @work belongs to
  * @work: work to insert
@@ -1070,7 +1036,7 @@ static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
 	if (likely(cwq->nr_active < cwq->max_active)) {
 		trace_workqueue_activate_work(work);
 		cwq->nr_active++;
-		worklist = pool_determine_ins_pos(cwq->pool, cwq);
+		worklist = &cwq->pool->worklist;
 	} else {
 		work_flags |= WORK_STRUCT_DELAYED;
 		worklist = &cwq->delayed_works;
@@ -1387,6 +1353,7 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
 {
 	struct global_cwq *gcwq = pool->gcwq;
 	bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+	const char *pri = worker_pool_pri(pool) ? "H" : "";
 	struct worker *worker = NULL;
 	int id = -1;
 
@@ -1408,15 +1375,17 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
 
 	if (!on_unbound_cpu)
 		worker->task = kthread_create_on_node(worker_thread,
-						      worker,
-						      cpu_to_node(gcwq->cpu),
-						      "kworker/%u:%d", gcwq->cpu, id);
+					worker, cpu_to_node(gcwq->cpu),
+					"kworker/%u:%d%s", gcwq->cpu, id, pri);
 	else
 		worker->task = kthread_create(worker_thread, worker,
-					      "kworker/u:%d", id);
+					      "kworker/u:%d%s", id, pri);
 	if (IS_ERR(worker->task))
 		goto fail;
 
+	if (worker_pool_pri(pool))
+		set_user_nice(worker->task, HIGHPRI_NICE_LEVEL);
+
 	/*
 	 * A rogue worker will become a regular one if CPU comes
 	 * online later on.  Make sure every worker has
@@ -1763,10 +1732,9 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *work = list_first_entry(&cwq->delayed_works,
 						    struct work_struct, entry);
-	struct list_head *pos = pool_determine_ins_pos(cwq->pool, cwq);
 
 	trace_workqueue_activate_work(work);
-	move_linked_works(work, pos, NULL);
+	move_linked_works(work, &cwq->pool->worklist, NULL);
 	__clear_bit(WORK_STRUCT_DELAYED_BIT, work_data_bits(work));
 	cwq->nr_active++;
 }
@@ -1882,21 +1850,6 @@ __acquires(&gcwq->lock)
 	list_del_init(&work->entry);
 
 	/*
-	 * If HIGHPRI_PENDING, check the next work, and, if HIGHPRI,
-	 * wake up another worker; otherwise, clear HIGHPRI_PENDING.
-	 */
-	if (unlikely(pool->flags & POOL_HIGHPRI_PENDING)) {
-		struct work_struct *nwork = list_first_entry(&pool->worklist,
-					 struct work_struct, entry);
-
-		if (!list_empty(&pool->worklist) &&
-		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
-			wake_up_worker(pool);
-		else
-			pool->flags &= ~POOL_HIGHPRI_PENDING;
-	}
-
-	/*
 	 * CPU intensive works don't participate in concurrency
 	 * management.  They're the scheduler's responsibility.
 	 */
@@ -3049,9 +3002,10 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		int pool_idx = (bool)(flags & WQ_HIGHPRI);
 
 		BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
-		cwq->pool = &gcwq->pool;
+		cwq->pool = &gcwq->pools[pool_idx];
 		cwq->wq = wq;
 		cwq->flush_color = -1;
 		cwq->max_active = max_active;
-- 
1.7.7.3


 	 * management.  They're the scheduler's responsibility.
 	 */
@@ -3049,9 +3002,10 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		int pool_idx = (bool)(flags & WQ_HIGHPRI);
 
 		BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
-		cwq->pool = &gcwq->pool;
+		cwq->pool = &gcwq->pools[pool_idx];
 		cwq->wq = wq;
 		cwq->flush_color = -1;
 		cwq->max_active = max_active;
-- 
1.7.7.3


^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH UPDATED 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
@ 2012-07-14  3:56     ` Tejun Heo
  0 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-14  3:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: axboe, Fengguang Wu, elder, rni, martin.petersen,
	linux-bluetooth, torvalds, marcel, vwadekar, swhiteho, herbert,
	bpm, Tony Luck, linux-crypto, gustavo, xfs, joshhunt00, davem,
	vgoyal, johan.hedberg

From 12f804d130d966f2a094e8037e9f163215d13f23 Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@kernel.org>
Date: Fri, 13 Jul 2012 20:50:50 -0700

WQ_HIGHPRI was implemented by queueing highpri work items at the head
of the global worklist.  Other than queueing at the head, they weren't
handled differently; unfortunately, this could lead to execution
latency of a few seconds on heavily loaded systems.

Now that workqueue code has been updated to deal with multiple
worker_pools per global_cwq, this patch reimplements WQ_HIGHPRI using
a separate worker_pool.  NR_WORKER_POOLS is bumped to two and
gcwq->pools[0] is used for normal pri work items and ->pools[1] for
highpri.  Highpri workers get -20 nice level and have an 'H' suffix in
their names.  Note that this change increases the number of kworkers
per cpu.

POOL_HIGHPRI_PENDING, pool_determine_ins_pos() and highpri chain
wakeup code in process_one_work() are no longer used and removed.

This allows proper prioritization of highpri work items and removes
high execution latency of highpri work items.

v2: nr_running indexing bug in get_pool_nr_running() fixed.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Josh Hunt <joshhunt00@gmail.com>
LKML-Reference: <CAKA=qzaHqwZ8eqpLNFjxnO2fX-tgAOjmpvxgBFjv6dJeQaOW1w@mail.gmail.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
---
git branch updated accordingly.  Thanks.

 Documentation/workqueue.txt |  103 ++++++++++++++++---------------------------
 kernel/workqueue.c          |  100 +++++++++++------------------------------
 2 files changed, 65 insertions(+), 138 deletions(-)

diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt
index a0b577d..a6ab4b6 100644
--- a/Documentation/workqueue.txt
+++ b/Documentation/workqueue.txt
@@ -89,25 +89,28 @@ called thread-pools.
 
 The cmwq design differentiates between the user-facing workqueues that
 subsystems and drivers queue work items on and the backend mechanism
-which manages thread-pool and processes the queued work items.
+which manages thread-pools and processes the queued work items.
 
 The backend is called gcwq.  There is one gcwq for each possible CPU
-and one gcwq to serve work items queued on unbound workqueues.
+and one gcwq to serve work items queued on unbound workqueues.  Each
+gcwq has two thread-pools - one for normal work items and the other
+for high priority ones.
 
 Subsystems and drivers can create and queue work items through special
 workqueue API functions as they see fit. They can influence some
 aspects of the way the work items are executed by setting flags on the
 workqueue they are putting the work item on. These flags include
-things like CPU locality, reentrancy, concurrency limits and more. To
-get a detailed overview refer to the API description of
+things like CPU locality, reentrancy, concurrency limits, priority and
+more.  To get a detailed overview refer to the API description of
 alloc_workqueue() below.
 
-When a work item is queued to a workqueue, the target gcwq is
-determined according to the queue parameters and workqueue attributes
-and appended on the shared worklist of the gcwq.  For example, unless
-specifically overridden, a work item of a bound workqueue will be
-queued on the worklist of exactly that gcwq that is associated to the
-CPU the issuer is running on.
+When a work item is queued to a workqueue, the target gcwq and
+thread-pool is determined according to the queue parameters and
+workqueue attributes and appended on the shared worklist of the
+thread-pool.  For example, unless specifically overridden, a work item
+of a bound workqueue will be queued on the worklist of either normal
+or highpri thread-pool of the gcwq that is associated to the CPU the
+issuer is running on.
 
 For any worker pool implementation, managing the concurrency level
 (how many execution contexts are active) is an important issue.  cmwq
@@ -115,26 +118,26 @@ tries to keep the concurrency at a minimal but sufficient level.
 Minimal to save resources and sufficient in that the system is used at
 its full capacity.
 
-Each gcwq bound to an actual CPU implements concurrency management by
-hooking into the scheduler.  The gcwq is notified whenever an active
-worker wakes up or sleeps and keeps track of the number of the
-currently runnable workers.  Generally, work items are not expected to
-hog a CPU and consume many cycles.  That means maintaining just enough
-concurrency to prevent work processing from stalling should be
-optimal.  As long as there are one or more runnable workers on the
-CPU, the gcwq doesn't start execution of a new work, but, when the
-last running worker goes to sleep, it immediately schedules a new
-worker so that the CPU doesn't sit idle while there are pending work
-items.  This allows using a minimal number of workers without losing
-execution bandwidth.
+Each thread-pool bound to an actual CPU implements concurrency
+management by hooking into the scheduler.  The thread-pool is notified
+whenever an active worker wakes up or sleeps and keeps track of the
+number of the currently runnable workers.  Generally, work items are
+not expected to hog a CPU and consume many cycles.  That means
+maintaining just enough concurrency to prevent work processing from
+stalling should be optimal.  As long as there are one or more runnable
+workers on the CPU, the thread-pool doesn't start execution of a new
+work, but, when the last running worker goes to sleep, it immediately
+schedules a new worker so that the CPU doesn't sit idle while there
+are pending work items.  This allows using a minimal number of workers
+without losing execution bandwidth.
 
 Keeping idle workers around doesn't cost other than the memory space
 for kthreads, so cmwq holds onto idle ones for a while before killing
 them.
 
 For an unbound wq, the above concurrency management doesn't apply and
-the gcwq for the pseudo unbound CPU tries to start executing all work
-items as soon as possible.  The responsibility of regulating
+the thread-pools for the pseudo unbound CPU try to start executing all
+work items as soon as possible.  The responsibility of regulating
 concurrency level is on the users.  There is also a flag to mark a
 bound wq to ignore the concurrency management.  Please refer to the
 API section for details.
@@ -205,31 +208,22 @@ resources, scheduled and executed.
 
   WQ_HIGHPRI
 
-	Work items of a highpri wq are queued at the head of the
-	worklist of the target gcwq and start execution regardless of
-	the current concurrency level.  In other words, highpri work
-	items will always start execution as soon as execution
-	resource is available.
+	Work items of a highpri wq are queued to the highpri
+	thread-pool of the target gcwq.  Highpri thread-pools are
+	served by worker threads with elevated nice level.
 
-	Ordering among highpri work items is preserved - a highpri
-	work item queued after another highpri work item will start
-	execution after the earlier highpri work item starts.
-
-	Although highpri work items are not held back by other
-	runnable work items, they still contribute to the concurrency
-	level.  Highpri work items in runnable state will prevent
-	non-highpri work items from starting execution.
-
-	This flag is meaningless for unbound wq.
+	Note that normal and highpri thread-pools don't interact with
+	each other.  Each maintain its separate pool of workers and
+	implements concurrency management among its workers.
 
   WQ_CPU_INTENSIVE
 
 	Work items of a CPU intensive wq do not contribute to the
 	concurrency level.  In other words, runnable CPU intensive
-	work items will not prevent other work items from starting
-	execution.  This is useful for bound work items which are
-	expected to hog CPU cycles so that their execution is
-	regulated by the system scheduler.
+	work items will not prevent other work items in the same
+	thread-pool from starting execution.  This is useful for bound
+	work items which are expected to hog CPU cycles so that their
+	execution is regulated by the system scheduler.
 
 	Although CPU intensive work items don't contribute to the
 	concurrency level, start of their executions is still
@@ -239,14 +233,6 @@ resources, scheduled and executed.
 
 	This flag is meaningless for unbound wq.
 
-  WQ_HIGHPRI | WQ_CPU_INTENSIVE
-
-	This combination makes the wq avoid interaction with
-	concurrency management completely and behave as a simple
-	per-CPU execution context provider.  Work items queued on a
-	highpri CPU-intensive wq start execution as soon as resources
-	are available and don't affect execution of other work items.
-
 @max_active:
 
 @max_active determines the maximum number of execution contexts per
@@ -328,20 +314,7 @@ If @max_active == 2,
  35		w2 wakes up and finishes
 
 Now, let's assume w1 and w2 are queued to a different wq q1 which has
-WQ_HIGHPRI set,
-
- TIME IN MSECS	EVENT
- 0		w1 and w2 start and burn CPU
- 5		w1 sleeps
- 10		w2 sleeps
- 10		w0 starts and burns CPU
- 15		w0 sleeps
- 15		w1 wakes up and finishes
- 20		w2 wakes up and finishes
- 25		w0 wakes up and burns CPU
- 30		w0 finishes
-
-If q1 has WQ_CPU_INTENSIVE set,
+WQ_CPU_INTENSIVE set,
 
  TIME IN MSECS	EVENT
  0		w0 starts and burns CPU
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 82eee34..30d014b 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -52,7 +52,6 @@ enum {
 	/* pool flags */
 	POOL_MANAGE_WORKERS	= 1 << 0,	/* need to manage workers */
 	POOL_MANAGING_WORKERS	= 1 << 1,	/* managing workers */
-	POOL_HIGHPRI_PENDING	= 1 << 2,	/* highpri works on queue */
 
 	/* worker flags */
 	WORKER_STARTED		= 1 << 0,	/* started */
@@ -74,7 +73,7 @@ enum {
 	TRUSTEE_RELEASE		= 3,		/* release workers */
 	TRUSTEE_DONE		= 4,		/* trustee is done */
 
-	NR_WORKER_POOLS		= 1,		/* # worker pools per gcwq */
+	NR_WORKER_POOLS		= 2,		/* # worker pools per gcwq */
 
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
 	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
@@ -95,6 +94,7 @@ enum {
 	 * all cpus.  Give -20.
 	 */
 	RESCUER_NICE_LEVEL	= -20,
+	HIGHPRI_NICE_LEVEL	= -20,
 };
 
 /*
@@ -174,7 +174,7 @@ struct global_cwq {
 	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
 						/* L: hash of busy workers */
 
-	struct worker_pool	pool;		/* the worker pools */
+	struct worker_pool	pools[2];	/* normal and highpri pools */
 
 	struct task_struct	*trustee;	/* L: for gcwq shutdown */
 	unsigned int		trustee_state;	/* L: trustee state */
@@ -277,7 +277,8 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 #include <trace/events/workqueue.h>
 
 #define for_each_worker_pool(pool, gcwq)				\
-	for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+	for ((pool) = &(gcwq)->pools[0];				\
+	     (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)
 
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
 	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
@@ -473,6 +474,11 @@ static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {
 
 static int worker_thread(void *__worker);
 
+static int worker_pool_pri(struct worker_pool *pool)
+{
+	return pool - pool->gcwq->pools;
+}
+
 static struct global_cwq *get_gcwq(unsigned int cpu)
 {
 	if (cpu != WORK_CPU_UNBOUND)
@@ -491,7 +497,7 @@ static atomic_t *get_pool_nr_running(struct worker_pool *pool)
 	else
 		nr_running = &unbound_pool_nr_running;
 
-	return &(*nr_running)[0];
+	return &(*nr_running)[worker_pool_pri(pool)];
 }
 
 static struct cpu_workqueue_struct *get_cwq(unsigned int cpu,
@@ -588,15 +594,14 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
 }
 
 /*
- * Policy functions.  These define the policies on how the global
- * worker pool is managed.  Unless noted otherwise, these functions
- * assume that they're being called with gcwq->lock held.
+ * Policy functions.  These define the policies on how the global worker
+ * pools are managed.  Unless noted otherwise, these functions assume that
+ * they're being called with gcwq->lock held.
  */
 
 static bool __need_more_worker(struct worker_pool *pool)
 {
-	return !atomic_read(get_pool_nr_running(pool)) ||
-		(pool->flags & POOL_HIGHPRI_PENDING);
+	return !atomic_read(get_pool_nr_running(pool));
 }
 
 /*
@@ -623,9 +628,7 @@ static bool keep_working(struct worker_pool *pool)
 {
 	atomic_t *nr_running = get_pool_nr_running(pool);
 
-	return !list_empty(&pool->worklist) &&
-		(atomic_read(nr_running) <= 1 ||
-		 (pool->flags & POOL_HIGHPRI_PENDING));
+	return !list_empty(&pool->worklist) && atomic_read(nr_running) <= 1;
 }
 
 /* Do we need a new worker?  Called from manager. */
@@ -894,43 +897,6 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
 }
 
 /**
- * pool_determine_ins_pos - find insertion position
- * @pool: pool of interest
- * @cwq: cwq a work is being queued for
- *
- * A work for @cwq is about to be queued on @pool, determine insertion
- * position for the work.  If @cwq is for HIGHPRI wq, the work is
- * queued at the head of the queue but in FIFO order with respect to
- * other HIGHPRI works; otherwise, at the end of the queue.  This
- * function also sets POOL_HIGHPRI_PENDING flag to hint @pool that
- * there are HIGHPRI works pending.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to inserstion position.
- */
-static inline struct list_head *pool_determine_ins_pos(struct worker_pool *pool,
-					       struct cpu_workqueue_struct *cwq)
-{
-	struct work_struct *twork;
-
-	if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
-		return &pool->worklist;
-
-	list_for_each_entry(twork, &pool->worklist, entry) {
-		struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
-
-		if (!(tcwq->wq->flags & WQ_HIGHPRI))
-			break;
-	}
-
-	pool->flags |= POOL_HIGHPRI_PENDING;
-	return &twork->entry;
-}
-
-/**
  * insert_work - insert a work into gcwq
  * @cwq: cwq @work belongs to
  * @work: work to insert
@@ -1070,7 +1036,7 @@ static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
 	if (likely(cwq->nr_active < cwq->max_active)) {
 		trace_workqueue_activate_work(work);
 		cwq->nr_active++;
-		worklist = pool_determine_ins_pos(cwq->pool, cwq);
+		worklist = &cwq->pool->worklist;
 	} else {
 		work_flags |= WORK_STRUCT_DELAYED;
 		worklist = &cwq->delayed_works;
@@ -1387,6 +1353,7 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
 {
 	struct global_cwq *gcwq = pool->gcwq;
 	bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+	const char *pri = worker_pool_pri(pool) ? "H" : "";
 	struct worker *worker = NULL;
 	int id = -1;
 
@@ -1408,15 +1375,17 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
 
 	if (!on_unbound_cpu)
 		worker->task = kthread_create_on_node(worker_thread,
-						      worker,
-						      cpu_to_node(gcwq->cpu),
-						      "kworker/%u:%d", gcwq->cpu, id);
+					worker, cpu_to_node(gcwq->cpu),
+					"kworker/%u:%d%s", gcwq->cpu, id, pri);
 	else
 		worker->task = kthread_create(worker_thread, worker,
-					      "kworker/u:%d", id);
+					      "kworker/u:%d%s", id, pri);
 	if (IS_ERR(worker->task))
 		goto fail;
 
+	if (worker_pool_pri(pool))
+		set_user_nice(worker->task, HIGHPRI_NICE_LEVEL);
+
 	/*
 	 * A rogue worker will become a regular one if CPU comes
 	 * online later on.  Make sure every worker has
@@ -1763,10 +1732,9 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *work = list_first_entry(&cwq->delayed_works,
 						    struct work_struct, entry);
-	struct list_head *pos = pool_determine_ins_pos(cwq->pool, cwq);
 
 	trace_workqueue_activate_work(work);
-	move_linked_works(work, pos, NULL);
+	move_linked_works(work, &cwq->pool->worklist, NULL);
 	__clear_bit(WORK_STRUCT_DELAYED_BIT, work_data_bits(work));
 	cwq->nr_active++;
 }
@@ -1882,21 +1850,6 @@ __acquires(&gcwq->lock)
 	list_del_init(&work->entry);
 
 	/*
-	 * If HIGHPRI_PENDING, check the next work, and, if HIGHPRI,
-	 * wake up another worker; otherwise, clear HIGHPRI_PENDING.
-	 */
-	if (unlikely(pool->flags & POOL_HIGHPRI_PENDING)) {
-		struct work_struct *nwork = list_first_entry(&pool->worklist,
-					 struct work_struct, entry);
-
-		if (!list_empty(&pool->worklist) &&
-		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
-			wake_up_worker(pool);
-		else
-			pool->flags &= ~POOL_HIGHPRI_PENDING;
-	}
-
-	/*
 	 * CPU intensive works don't participate in concurrency
 	 * management.  They're the scheduler's responsibility.
 	 */
@@ -3049,9 +3002,10 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		int pool_idx = (bool)(flags & WQ_HIGHPRI);
 
 		BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
-		cwq->pool = &gcwq->pool;
+		cwq->pool = &gcwq->pools[pool_idx];
 		cwq->wq = wq;
 		cwq->flush_color = -1;
 		cwq->max_active = max_active;
-- 
1.7.7.3

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()
  2012-07-14  3:55     ` Tejun Heo
  (?)
@ 2012-07-14  4:27       ` Linus Torvalds
  -1 siblings, 0 replies; 96+ messages in thread
From: Linus Torvalds @ 2012-07-14  4:27 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen

Seeing code like this

+       return &(*nr_running)[0];

just makes me go "WTF?"

Why are you taking the address of something you just dereferenced (the
"& [0]" part)?

And you actually do that *twice*, except the inner one is more
complicated. When you assign nr_running, you take the address of it, so
the "*nr_running" is actually just the same kind of odd thing (except
in reverse - you dereference something you just took the
address-of).

Seriously, this to me is a sign of *deeply* confused code. And the
fact that your first version of that code was buggy *EXACTLY* due to
this confusion should have made you take a step back.

As far as I can tell, what you actually want that function to do is:

  static atomic_t *get_pool_nr_running(struct worker_pool *pool)
  {
    int cpu = pool->gcwq->cpu;

    if (cpu != WORK_CPU_UNBOUND)
        return per_cpu(pool_nr_running, cpu);

    return unbound_pool_nr_running;
  }

Notice how there isn't an 'address-of' operator anywhere in sight
there. Those things are arrays, they get turned into "atomic_t *"
automatically. And there isn't a single dereference (not a '*', and
not a "[0]" - they are the exact same thing, btw) in sight either.

What am I missing? Are there some new drugs that all the cool kids
chew that I should be trying? Because I really don't think the kinds
of insane "take the address of a dereference" games are a good idea.
They really look to me like somebody is having a really bad drug
experience.

I didn't test the code, btw. I just looked at the patch and went WTF.

                Linus



* Re: [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()
  2012-07-14  4:27       ` Linus Torvalds
  (?)
@ 2012-07-14  4:44           ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-14  4:44 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen

Hello, Linus.

On Fri, Jul 13, 2012 at 09:27:03PM -0700, Linus Torvalds wrote:
> Seeing code like this
> 
> +       return &(*nr_running)[0];
> 
> just makes me go "WTF?"

I was going WTF too.  This was the smallest fix and I wanted to make
it minimal because there's another stack of patches on top of it.
Planning to just fold nr_running into worker_pool afterwards which
will remove the whole function.

> Why are you taking the address of something you just dereferenced (the
> "& [0]" part).

nr_running is atomic_t (*nr_running)[2].  Ignoring the pointer to
array part, it's just returning the address of N'th element of the
array.  ARRAY + N == &ARRAY[N].

> And you actually do that *twice*, except the inner one is more
> complicated. When you assign nr_running, you take the address of it, so
> the "*nr_running" is actually just the same kind of odd thing (except
> in reverse - you dereference something you just took the
> address-of).
> 
> Seriously, this to me is a sign of *deeply* confused code. And the
> fact that your first version of that code was buggy *EXACTLY* due to
> this confusion should have made you take a step back.

Type-wise, I don't think it's confused.  Ah okay, you're looking at
the fifth patch in isolation.  Up to this point, the index is always 0.
I'm putting it in as a placeholder for the next patch which makes use
of a non-zero index.  This patch is supposed to prepare everything for
multiple pools and thus non-zero indices.

> As far as I can tell, what you actually want that function to do is:
> 
>   static atomic_t *get_pool_nr_running(struct worker_pool *pool)
>   {
>     int cpu = pool->gcwq->cpu;
> 
>     if (cpu != WORK_CPU_UNBOUND)
>         return per_cpu(pool_nr_running, cpu);
> 
>     return unbound_pool_nr_running;
>   }

More like the following in the end.

static atomic_t *get_pool_nr_running(struct worker_pool *pool)
{
	int cpu = pool->gcwq->cpu;
	int is_highpri = pool_is_highpri(pool);

	if (cpu != WORK_CPU_UNBOUND)
		return &per_cpu(pool_nr_running, cpu)[is_highpri];

	return &unbound_pool_nr_running[is_highpri];
}

> I didn't test the code, btw. I just looked at the patch and went WTF.

Eh... yeah, with or without [2], this is WTF.  I'll just refresh it
with the above version.

Thanks.

-- 
tejun


* Re: [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()
@ 2012-07-14  4:44           ` Tejun Heo
  0 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-14  4:44 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen

Hello, Linus.

On Fri, Jul 13, 2012 at 09:27:03PM -0700, Linus Torvalds wrote:
> Seeing code like this
> 
> +       return &(*nr_running)[0];
> 
> just makes me go "WTF?"

I was going WTF too.  This was the smallest fix and I wanted to make
it minimal because there's another stack of patches on top of it.
Planning to just fold nr_running into worker_pool afterwards which
will remove the whole function.

> Why are you taking the address of something you just dereferenced (the
> "& [0]" part).

nr_running is atomic_t (*nr_running)[2].  Ignoring the pointer to
array part, it's just returning the address of N'th element of the
array.  ARRAY + N == &ARRAY[N].

> And you actually do that *twice*, except the inner one is more
> complicated. When you assign nr_runing, you take the address of it, so
> the "*nr_running" is actually just the same kind of odd thing (except
> in reverse - you take dereference something you just took the
> address-of).
> 
> Seriously, this to me is a sign of *deeply* confused code. And the
> fact that your first version of that code was buggy *EXACTLY* due to
> this confusion should have made you take a step back.

Type-wise, I don't think it's confused.  Ah okay, you're looking at
the fifth patch in isolation.  Upto this point, the index is always 0.
I'm puttin it in as a placeholder for the next patch which makes use
of non-zero index.  This patch is supposed to prepare everything for
multiple pools and thus non-zero index.

> As far as I can tell, what you actually want that function to do is:
> 
>   static atomic_t *get_pool_nr_running(struct worker_pool *pool)
>   {
>     int cpu = pool->gcwq->cpu;
> 
>     if (cpu != WORK_CPU_UNBOUND)
>         return per_cpu(pool_nr_running, cpu);
> 
>     return unbound_pool_nr_running;
>   }

More like the following in the end.

static atomic_t *get_pool_nr_running(struct worker_pool *pool)
{
	int cpu = pool->gcwq->cpu;
	int is_highpri = pool_is_highpri(pool);

	if (cpu != WORK_CPU_UNBOUND)
		return &per_cpu(pool_nr_running, cpu)[is_highpri];

	return &unbound_pool_nr_running[is_highpri];
}

> I didn't test the code, btw. I just looked at the patch and went WTF.

Eh... yeah, with or without [2], this is WTF.  I'll just refresh it
with the above version.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()
  2012-07-14  4:44           ` Tejun Heo
  (?)
@ 2012-07-14  5:00             ` Linus Torvalds
  -1 siblings, 0 replies; 96+ messages in thread
From: Linus Torvalds @ 2012-07-14  5:00 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen

On Fri, Jul 13, 2012 at 9:44 PM, Tejun Heo <tj@kernel.org> wrote:
>
> nr_running is atomic_t (*nr_running)[2].  Ignoring the pointer to
> array part, it's just returning the address of N'th element of the
> array.  ARRAY + N == &ARRAY[N].

None of this matters one whit.

You did "&(x)[0]".

That's insane. It's crazy. It doesn't even matter what "x" is in
between, it's crazy regardless.

It's just a really confused way of saying "x" (*). Except it makes the
code look like an insane monkey on crack got a-hold of your keyboard
when you weren't looking.

And to make it worse, "x" itself was the result of doing "*&y". Which
was probably written by the insane monkey's older brother, Max, who
has been chewing Quaaludes for a few years, and as a result _his_
brain really isn't doing too well either. Even for a monkey. And now
you're letting *him* at your keyboard too?

So you had two separately (but similarly) insane ways of complicating
the code so that it was really obfuscated. When it really just
computed "y" to begin with, it just added all those "x=*&y" and
"&(x)[0]" games around it to make it look complicated.

            Linus

(*) Technically, "&(x)[0]" is actually a really confused way of saying
"(x+0)" while making sure that "x" was a valid pointer. It basically
guarantees that if "x" started out as an array, it has now been
demoted to a pointer - but since arrays will be demoted to pointers by
pretty much any subsequent operation except for "sizeof()" and a
couple of other special cases anyway, you can pretty much just say
that "&(x)[0]" is "(x+0)" is "x".

And "*&y" really is exactly the same as "y", except for again some
syntactic checking (ie it is basically an odd way to verify that "y"
is an lvalue, since you cannot do an address-of of a non-lvalue).

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()
  2012-07-14  5:00             ` Linus Torvalds
  (?)
@ 2012-07-14  5:07               ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-14  5:07 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: linux-kernel, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen

Hey, Linus.

On Fri, Jul 13, 2012 at 10:00:10PM -0700, Linus Torvalds wrote:
> On Fri, Jul 13, 2012 at 9:44 PM, Tejun Heo <tj@kernel.org> wrote:
> >
> > nr_running is atomic_t (*nr_running)[2].  Ignoring the pointer to
> > array part, it's just returning the address of N'th element of the
> > array.  ARRAY + N == &ARRAY[N].
> 
> None of this matters one whit.
> 
> You did "&(x)[0]".
> 
> That's insane. It's crazy. It doesn't even matter what "x" is in
> between, it's crazy regardless.

Eh, from my previous reply.

| Ah okay, you're looking at the fifth patch in isolation.  Up to
| this point, the index is always 0.  I'm putting it in as a
| placeholder for the next patch, which makes use of a non-zero
| index.  This patch is supposed to prepare everything for multiple
| pools and thus non-zero index.

The patch is about converting stuff to handle size-1 array without
introducing any actual behavior change so that the next patch can bump
the array size and just change the index.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 96+ messages in thread

* [PATCH UPDATED 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()
  2012-07-09 18:41   ` Tejun Heo
  (?)
@ 2012-07-14  5:21       ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-14  5:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen

From 4ce62e9e30cacc26885cab133ad1de358dd79f21 Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@kernel.org>
Date: Fri, 13 Jul 2012 22:16:44 -0700

Introduce NR_WORKER_POOLS and for_each_worker_pool() and convert code
paths which need to manipulate all pools in a gcwq to use them.
NR_WORKER_POOLS is currently one and for_each_worker_pool() iterates
over only @gcwq->pool.

Note that nr_running is a per-pool property and is converted to an
array with NR_WORKER_POOLS elements and renamed to pool_nr_running.
Note that get_pool_nr_running() currently assumes index 0.  The next
patch will make use of non-zero index.

The changes in this patch are mechanical and don't cause any
functional difference.  This is to prepare for multiple pools per
gcwq.

v2: nr_running indexing bug in get_pool_nr_running() fixed.

v3: Pointer to array is stupid.  Don't use it in get_pool_nr_running()
    as suggested by Linus.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
---
So, the same 0 index silliness but this shouldn't be as fugly.

Thanks.

 kernel/workqueue.c |  223 +++++++++++++++++++++++++++++++++++----------------
 1 files changed, 153 insertions(+), 70 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7a98bae..b0daaea 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -74,6 +74,8 @@ enum {
 	TRUSTEE_RELEASE		= 3,		/* release workers */
 	TRUSTEE_DONE		= 4,		/* trustee is done */
 
+	NR_WORKER_POOLS		= 1,		/* # worker pools per gcwq */
+
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
 	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
 	BUSY_WORKER_HASH_MASK	= BUSY_WORKER_HASH_SIZE - 1,
@@ -274,6 +276,9 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 #define CREATE_TRACE_POINTS
 #include <trace/events/workqueue.h>
 
+#define for_each_worker_pool(pool, gcwq)				\
+	for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
 	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
 		hlist_for_each_entry(worker, pos, &gcwq->busy_hash[i], hentry)
@@ -454,7 +459,7 @@ static bool workqueue_freezing;		/* W: have wqs started freezing? */
  * try_to_wake_up().  Put it in a separate cacheline.
  */
 static DEFINE_PER_CPU(struct global_cwq, global_cwq);
-static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
+static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, pool_nr_running[NR_WORKER_POOLS]);
 
 /*
  * Global cpu workqueue and nr_running counter for unbound gcwq.  The
@@ -462,7 +467,9 @@ static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
  * workers have WORKER_UNBOUND set.
  */
 static struct global_cwq unbound_global_cwq;
-static atomic_t unbound_gcwq_nr_running = ATOMIC_INIT(0);	/* always 0 */
+static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {
+	[0 ... NR_WORKER_POOLS - 1]	= ATOMIC_INIT(0),	/* always 0 */
+};
 
 static int worker_thread(void *__worker);
 
@@ -477,11 +484,12 @@ static struct global_cwq *get_gcwq(unsigned int cpu)
 static atomic_t *get_pool_nr_running(struct worker_pool *pool)
 {
 	int cpu = pool->gcwq->cpu;
+	int idx = 0;
 
 	if (cpu != WORK_CPU_UNBOUND)
-		return &per_cpu(gcwq_nr_running, cpu);
+		return &per_cpu(pool_nr_running, cpu)[idx];
 	else
-		return &unbound_gcwq_nr_running;
+		return &unbound_pool_nr_running[idx];
 }
 
 static struct cpu_workqueue_struct *get_cwq(unsigned int cpu,
@@ -3345,9 +3353,30 @@ EXPORT_SYMBOL_GPL(work_busy);
 	__ret1 < 0 ? -1 : 0;						\
 })
 
+static bool gcwq_is_managing_workers(struct global_cwq *gcwq)
+{
+	struct worker_pool *pool;
+
+	for_each_worker_pool(pool, gcwq)
+		if (pool->flags & POOL_MANAGING_WORKERS)
+			return true;
+	return false;
+}
+
+static bool gcwq_has_idle_workers(struct global_cwq *gcwq)
+{
+	struct worker_pool *pool;
+
+	for_each_worker_pool(pool, gcwq)
+		if (!list_empty(&pool->idle_list))
+			return true;
+	return false;
+}
+
 static int __cpuinit trustee_thread(void *__gcwq)
 {
 	struct global_cwq *gcwq = __gcwq;
+	struct worker_pool *pool;
 	struct worker *worker;
 	struct work_struct *work;
 	struct hlist_node *pos;
@@ -3363,13 +3392,15 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * cancelled.
 	 */
 	BUG_ON(gcwq->cpu != smp_processor_id());
-	rc = trustee_wait_event(!(gcwq->pool.flags & POOL_MANAGING_WORKERS));
+	rc = trustee_wait_event(!gcwq_is_managing_workers(gcwq));
 	BUG_ON(rc < 0);
 
-	gcwq->pool.flags |= POOL_MANAGING_WORKERS;
+	for_each_worker_pool(pool, gcwq) {
+		pool->flags |= POOL_MANAGING_WORKERS;
 
-	list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
-		worker->flags |= WORKER_ROGUE;
+		list_for_each_entry(worker, &pool->idle_list, entry)
+			worker->flags |= WORKER_ROGUE;
+	}
 
 	for_each_busy_worker(worker, i, pos, gcwq)
 		worker->flags |= WORKER_ROGUE;
@@ -3390,10 +3421,12 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * keep_working() are always true as long as the worklist is
 	 * not empty.
 	 */
-	atomic_set(get_pool_nr_running(&gcwq->pool), 0);
+	for_each_worker_pool(pool, gcwq)
+		atomic_set(get_pool_nr_running(pool), 0);
 
 	spin_unlock_irq(&gcwq->lock);
-	del_timer_sync(&gcwq->pool.idle_timer);
+	for_each_worker_pool(pool, gcwq)
+		del_timer_sync(&pool->idle_timer);
 	spin_lock_irq(&gcwq->lock);
 
 	/*
@@ -3415,29 +3448,38 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * may be frozen works in freezable cwqs.  Don't declare
 	 * completion while frozen.
 	 */
-	while (gcwq->pool.nr_workers != gcwq->pool.nr_idle ||
-	       gcwq->flags & GCWQ_FREEZING ||
-	       gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
-		int nr_works = 0;
+	while (true) {
+		bool busy = false;
 
-		list_for_each_entry(work, &gcwq->pool.worklist, entry) {
-			send_mayday(work);
-			nr_works++;
-		}
+		for_each_worker_pool(pool, gcwq)
+			busy |= pool->nr_workers != pool->nr_idle;
 
-		list_for_each_entry(worker, &gcwq->pool.idle_list, entry) {
-			if (!nr_works--)
-				break;
-			wake_up_process(worker->task);
-		}
+		if (!busy && !(gcwq->flags & GCWQ_FREEZING) &&
+		    gcwq->trustee_state != TRUSTEE_IN_CHARGE)
+			break;
 
-		if (need_to_create_worker(&gcwq->pool)) {
-			spin_unlock_irq(&gcwq->lock);
-			worker = create_worker(&gcwq->pool, false);
-			spin_lock_irq(&gcwq->lock);
-			if (worker) {
-				worker->flags |= WORKER_ROGUE;
-				start_worker(worker);
+		for_each_worker_pool(pool, gcwq) {
+			int nr_works = 0;
+
+			list_for_each_entry(work, &pool->worklist, entry) {
+				send_mayday(work);
+				nr_works++;
+			}
+
+			list_for_each_entry(worker, &pool->idle_list, entry) {
+				if (!nr_works--)
+					break;
+				wake_up_process(worker->task);
+			}
+
+			if (need_to_create_worker(pool)) {
+				spin_unlock_irq(&gcwq->lock);
+				worker = create_worker(pool, false);
+				spin_lock_irq(&gcwq->lock);
+				if (worker) {
+					worker->flags |= WORKER_ROGUE;
+					start_worker(worker);
+				}
 			}
 		}
 
@@ -3452,11 +3494,18 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * all workers till we're canceled.
 	 */
 	do {
-		rc = trustee_wait_event(!list_empty(&gcwq->pool.idle_list));
-		while (!list_empty(&gcwq->pool.idle_list))
-			destroy_worker(list_first_entry(&gcwq->pool.idle_list,
-							struct worker, entry));
-	} while (gcwq->pool.nr_workers && rc >= 0);
+		rc = trustee_wait_event(gcwq_has_idle_workers(gcwq));
+
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			while (!list_empty(&pool->idle_list)) {
+				worker = list_first_entry(&pool->idle_list,
+							  struct worker, entry);
+				destroy_worker(worker);
+			}
+			i |= pool->nr_workers;
+		}
+	} while (i && rc >= 0);
 
 	/*
 	 * At this point, either draining has completed and no worker
@@ -3465,7 +3514,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * Tell the remaining busy ones to rebind once it finishes the
 	 * currently scheduled works by scheduling the rebind_work.
 	 */
-	WARN_ON(!list_empty(&gcwq->pool.idle_list));
+	for_each_worker_pool(pool, gcwq)
+		WARN_ON(!list_empty(&pool->idle_list));
 
 	for_each_busy_worker(worker, i, pos, gcwq) {
 		struct work_struct *rebind_work = &worker->rebind_work;
@@ -3490,7 +3540,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	}
 
 	/* relinquish manager role */
-	gcwq->pool.flags &= ~POOL_MANAGING_WORKERS;
+	for_each_worker_pool(pool, gcwq)
+		pool->flags &= ~POOL_MANAGING_WORKERS;
 
 	/* notify completion */
 	gcwq->trustee = NULL;
@@ -3532,8 +3583,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 	unsigned int cpu = (unsigned long)hcpu;
 	struct global_cwq *gcwq = get_gcwq(cpu);
 	struct task_struct *new_trustee = NULL;
-	struct worker *uninitialized_var(new_worker);
+	struct worker *new_workers[NR_WORKER_POOLS] = { };
+	struct worker_pool *pool;
 	unsigned long flags;
+	int i;
 
 	action &= ~CPU_TASKS_FROZEN;
 
@@ -3546,12 +3599,12 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		kthread_bind(new_trustee, cpu);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->pool.first_idle);
-		new_worker = create_worker(&gcwq->pool, false);
-		if (!new_worker) {
-			if (new_trustee)
-				kthread_stop(new_trustee);
-			return NOTIFY_BAD;
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			BUG_ON(pool->first_idle);
+			new_workers[i] = create_worker(pool, false);
+			if (!new_workers[i++])
+				goto err_destroy;
 		}
 	}
 
@@ -3568,8 +3621,11 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = new_worker;
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			BUG_ON(pool->first_idle);
+			pool->first_idle = new_workers[i++];
+		}
 		break;
 
 	case CPU_DYING:
@@ -3586,8 +3642,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		gcwq->trustee_state = TRUSTEE_BUTCHER;
 		/* fall through */
 	case CPU_UP_CANCELED:
-		destroy_worker(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = NULL;
+		for_each_worker_pool(pool, gcwq) {
+			destroy_worker(pool->first_idle);
+			pool->first_idle = NULL;
+		}
 		break;
 
 	case CPU_DOWN_FAILED:
@@ -3604,18 +3662,32 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		 * Put the first_idle in and request a real manager to
 		 * take a look.
 		 */
-		spin_unlock_irq(&gcwq->lock);
-		kthread_bind(gcwq->pool.first_idle->task, cpu);
-		spin_lock_irq(&gcwq->lock);
-		gcwq->pool.flags |= POOL_MANAGE_WORKERS;
-		start_worker(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = NULL;
+		for_each_worker_pool(pool, gcwq) {
+			spin_unlock_irq(&gcwq->lock);
+			kthread_bind(pool->first_idle->task, cpu);
+			spin_lock_irq(&gcwq->lock);
+			pool->flags |= POOL_MANAGE_WORKERS;
+			start_worker(pool->first_idle);
+			pool->first_idle = NULL;
+		}
 		break;
 	}
 
 	spin_unlock_irqrestore(&gcwq->lock, flags);
 
 	return notifier_from_errno(0);
+
+err_destroy:
+	if (new_trustee)
+		kthread_stop(new_trustee);
+
+	spin_lock_irqsave(&gcwq->lock, flags);
+	for (i = 0; i < NR_WORKER_POOLS; i++)
+		if (new_workers[i])
+			destroy_worker(new_workers[i]);
+	spin_unlock_irqrestore(&gcwq->lock, flags);
+
+	return NOTIFY_BAD;
 }
 
 #ifdef CONFIG_SMP
@@ -3774,6 +3846,7 @@ void thaw_workqueues(void)
 
 	for_each_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		struct worker_pool *pool;
 		struct workqueue_struct *wq;
 
 		spin_lock_irq(&gcwq->lock);
@@ -3795,7 +3868,8 @@ void thaw_workqueues(void)
 				cwq_activate_first_delayed(cwq);
 		}
 
-		wake_up_worker(&gcwq->pool);
+		for_each_worker_pool(pool, gcwq)
+			wake_up_worker(pool);
 
 		spin_unlock_irq(&gcwq->lock);
 	}
@@ -3816,25 +3890,29 @@ static int __init init_workqueues(void)
 	/* initialize gcwqs */
 	for_each_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		struct worker_pool *pool;
 
 		spin_lock_init(&gcwq->lock);
-		gcwq->pool.gcwq = gcwq;
-		INIT_LIST_HEAD(&gcwq->pool.worklist);
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		INIT_LIST_HEAD(&gcwq->pool.idle_list);
 		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
 			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
 
-		init_timer_deferrable(&gcwq->pool.idle_timer);
-		gcwq->pool.idle_timer.function = idle_worker_timeout;
-		gcwq->pool.idle_timer.data = (unsigned long)&gcwq->pool;
+		for_each_worker_pool(pool, gcwq) {
+			pool->gcwq = gcwq;
+			INIT_LIST_HEAD(&pool->worklist);
+			INIT_LIST_HEAD(&pool->idle_list);
 
-		setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
-			    (unsigned long)&gcwq->pool);
+			init_timer_deferrable(&pool->idle_timer);
+			pool->idle_timer.function = idle_worker_timeout;
+			pool->idle_timer.data = (unsigned long)pool;
 
-		ida_init(&gcwq->pool.worker_ida);
+			setup_timer(&pool->mayday_timer, gcwq_mayday_timeout,
+				    (unsigned long)pool);
+
+			ida_init(&pool->worker_ida);
+		}
 
 		gcwq->trustee_state = TRUSTEE_DONE;
 		init_waitqueue_head(&gcwq->trustee_wait);
@@ -3843,15 +3921,20 @@ static int __init init_workqueues(void)
 	/* create the initial worker */
 	for_each_online_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
-		struct worker *worker;
+		struct worker_pool *pool;
 
 		if (cpu != WORK_CPU_UNBOUND)
 			gcwq->flags &= ~GCWQ_DISASSOCIATED;
-		worker = create_worker(&gcwq->pool, true);
-		BUG_ON(!worker);
-		spin_lock_irq(&gcwq->lock);
-		start_worker(worker);
-		spin_unlock_irq(&gcwq->lock);
+
+		for_each_worker_pool(pool, gcwq) {
+			struct worker *worker;
+
+			worker = create_worker(pool, true);
+			BUG_ON(!worker);
+			spin_lock_irq(&gcwq->lock);
+			start_worker(worker);
+			spin_unlock_irq(&gcwq->lock);
+		}
 	}
 
 	system_wq = alloc_workqueue("events", 0, 0);
-- 
1.7.7.3

^ permalink raw reply related	[flat|nested] 96+ messages in thread

@@ -3345,9 +3353,30 @@ EXPORT_SYMBOL_GPL(work_busy);
 	__ret1 < 0 ? -1 : 0;						\
 })
 
+static bool gcwq_is_managing_workers(struct global_cwq *gcwq)
+{
+	struct worker_pool *pool;
+
+	for_each_worker_pool(pool, gcwq)
+		if (pool->flags & POOL_MANAGING_WORKERS)
+			return true;
+	return false;
+}
+
+static bool gcwq_has_idle_workers(struct global_cwq *gcwq)
+{
+	struct worker_pool *pool;
+
+	for_each_worker_pool(pool, gcwq)
+		if (!list_empty(&pool->idle_list))
+			return true;
+	return false;
+}
+
 static int __cpuinit trustee_thread(void *__gcwq)
 {
 	struct global_cwq *gcwq = __gcwq;
+	struct worker_pool *pool;
 	struct worker *worker;
 	struct work_struct *work;
 	struct hlist_node *pos;
@@ -3363,13 +3392,15 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * cancelled.
 	 */
 	BUG_ON(gcwq->cpu != smp_processor_id());
-	rc = trustee_wait_event(!(gcwq->pool.flags & POOL_MANAGING_WORKERS));
+	rc = trustee_wait_event(!gcwq_is_managing_workers(gcwq));
 	BUG_ON(rc < 0);
 
-	gcwq->pool.flags |= POOL_MANAGING_WORKERS;
+	for_each_worker_pool(pool, gcwq) {
+		pool->flags |= POOL_MANAGING_WORKERS;
 
-	list_for_each_entry(worker, &gcwq->pool.idle_list, entry)
-		worker->flags |= WORKER_ROGUE;
+		list_for_each_entry(worker, &pool->idle_list, entry)
+			worker->flags |= WORKER_ROGUE;
+	}
 
 	for_each_busy_worker(worker, i, pos, gcwq)
 		worker->flags |= WORKER_ROGUE;
@@ -3390,10 +3421,12 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * keep_working() are always true as long as the worklist is
 	 * not empty.
 	 */
-	atomic_set(get_pool_nr_running(&gcwq->pool), 0);
+	for_each_worker_pool(pool, gcwq)
+		atomic_set(get_pool_nr_running(pool), 0);
 
 	spin_unlock_irq(&gcwq->lock);
-	del_timer_sync(&gcwq->pool.idle_timer);
+	for_each_worker_pool(pool, gcwq)
+		del_timer_sync(&pool->idle_timer);
 	spin_lock_irq(&gcwq->lock);
 
 	/*
@@ -3415,29 +3448,38 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * may be frozen works in freezable cwqs.  Don't declare
 	 * completion while frozen.
 	 */
-	while (gcwq->pool.nr_workers != gcwq->pool.nr_idle ||
-	       gcwq->flags & GCWQ_FREEZING ||
-	       gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
-		int nr_works = 0;
+	while (true) {
+		bool busy = false;
 
-		list_for_each_entry(work, &gcwq->pool.worklist, entry) {
-			send_mayday(work);
-			nr_works++;
-		}
+		for_each_worker_pool(pool, gcwq)
+			busy |= pool->nr_workers != pool->nr_idle;
 
-		list_for_each_entry(worker, &gcwq->pool.idle_list, entry) {
-			if (!nr_works--)
-				break;
-			wake_up_process(worker->task);
-		}
+		if (!busy && !(gcwq->flags & GCWQ_FREEZING) &&
+		    gcwq->trustee_state != TRUSTEE_IN_CHARGE)
+			break;
 
-		if (need_to_create_worker(&gcwq->pool)) {
-			spin_unlock_irq(&gcwq->lock);
-			worker = create_worker(&gcwq->pool, false);
-			spin_lock_irq(&gcwq->lock);
-			if (worker) {
-				worker->flags |= WORKER_ROGUE;
-				start_worker(worker);
+		for_each_worker_pool(pool, gcwq) {
+			int nr_works = 0;
+
+			list_for_each_entry(work, &pool->worklist, entry) {
+				send_mayday(work);
+				nr_works++;
+			}
+
+			list_for_each_entry(worker, &pool->idle_list, entry) {
+				if (!nr_works--)
+					break;
+				wake_up_process(worker->task);
+			}
+
+			if (need_to_create_worker(pool)) {
+				spin_unlock_irq(&gcwq->lock);
+				worker = create_worker(pool, false);
+				spin_lock_irq(&gcwq->lock);
+				if (worker) {
+					worker->flags |= WORKER_ROGUE;
+					start_worker(worker);
+				}
 			}
 		}
 
@@ -3452,11 +3494,18 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * all workers till we're canceled.
 	 */
 	do {
-		rc = trustee_wait_event(!list_empty(&gcwq->pool.idle_list));
-		while (!list_empty(&gcwq->pool.idle_list))
-			destroy_worker(list_first_entry(&gcwq->pool.idle_list,
-							struct worker, entry));
-	} while (gcwq->pool.nr_workers && rc >= 0);
+		rc = trustee_wait_event(gcwq_has_idle_workers(gcwq));
+
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			while (!list_empty(&pool->idle_list)) {
+				worker = list_first_entry(&pool->idle_list,
+							  struct worker, entry);
+				destroy_worker(worker);
+			}
+			i |= pool->nr_workers;
+		}
+	} while (i && rc >= 0);
 
 	/*
 	 * At this point, either draining has completed and no worker
@@ -3465,7 +3514,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	 * Tell the remaining busy ones to rebind once it finishes the
 	 * currently scheduled works by scheduling the rebind_work.
 	 */
-	WARN_ON(!list_empty(&gcwq->pool.idle_list));
+	for_each_worker_pool(pool, gcwq)
+		WARN_ON(!list_empty(&pool->idle_list));
 
 	for_each_busy_worker(worker, i, pos, gcwq) {
 		struct work_struct *rebind_work = &worker->rebind_work;
@@ -3490,7 +3540,8 @@ static int __cpuinit trustee_thread(void *__gcwq)
 	}
 
 	/* relinquish manager role */
-	gcwq->pool.flags &= ~POOL_MANAGING_WORKERS;
+	for_each_worker_pool(pool, gcwq)
+		pool->flags &= ~POOL_MANAGING_WORKERS;
 
 	/* notify completion */
 	gcwq->trustee = NULL;
@@ -3532,8 +3583,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 	unsigned int cpu = (unsigned long)hcpu;
 	struct global_cwq *gcwq = get_gcwq(cpu);
 	struct task_struct *new_trustee = NULL;
-	struct worker *uninitialized_var(new_worker);
+	struct worker *new_workers[NR_WORKER_POOLS] = { };
+	struct worker_pool *pool;
 	unsigned long flags;
+	int i;
 
 	action &= ~CPU_TASKS_FROZEN;
 
@@ -3546,12 +3599,12 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		kthread_bind(new_trustee, cpu);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->pool.first_idle);
-		new_worker = create_worker(&gcwq->pool, false);
-		if (!new_worker) {
-			if (new_trustee)
-				kthread_stop(new_trustee);
-			return NOTIFY_BAD;
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			BUG_ON(pool->first_idle);
+			new_workers[i] = create_worker(pool, false);
+			if (!new_workers[i++])
+				goto err_destroy;
 		}
 	}
 
@@ -3568,8 +3621,11 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
 		/* fall through */
 	case CPU_UP_PREPARE:
-		BUG_ON(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = new_worker;
+		i = 0;
+		for_each_worker_pool(pool, gcwq) {
+			BUG_ON(pool->first_idle);
+			pool->first_idle = new_workers[i++];
+		}
 		break;
 
 	case CPU_DYING:
@@ -3586,8 +3642,10 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		gcwq->trustee_state = TRUSTEE_BUTCHER;
 		/* fall through */
 	case CPU_UP_CANCELED:
-		destroy_worker(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = NULL;
+		for_each_worker_pool(pool, gcwq) {
+			destroy_worker(pool->first_idle);
+			pool->first_idle = NULL;
+		}
 		break;
 
 	case CPU_DOWN_FAILED:
@@ -3604,18 +3662,32 @@ static int __devinit workqueue_cpu_callback(struct notifier_block *nfb,
 		 * Put the first_idle in and request a real manager to
 		 * take a look.
 		 */
-		spin_unlock_irq(&gcwq->lock);
-		kthread_bind(gcwq->pool.first_idle->task, cpu);
-		spin_lock_irq(&gcwq->lock);
-		gcwq->pool.flags |= POOL_MANAGE_WORKERS;
-		start_worker(gcwq->pool.first_idle);
-		gcwq->pool.first_idle = NULL;
+		for_each_worker_pool(pool, gcwq) {
+			spin_unlock_irq(&gcwq->lock);
+			kthread_bind(pool->first_idle->task, cpu);
+			spin_lock_irq(&gcwq->lock);
+			pool->flags |= POOL_MANAGE_WORKERS;
+			start_worker(pool->first_idle);
+			pool->first_idle = NULL;
+		}
 		break;
 	}
 
 	spin_unlock_irqrestore(&gcwq->lock, flags);
 
 	return notifier_from_errno(0);
+
+err_destroy:
+	if (new_trustee)
+		kthread_stop(new_trustee);
+
+	spin_lock_irqsave(&gcwq->lock, flags);
+	for (i = 0; i < NR_WORKER_POOLS; i++)
+		if (new_workers[i])
+			destroy_worker(new_workers[i]);
+	spin_unlock_irqrestore(&gcwq->lock, flags);
+
+	return NOTIFY_BAD;
 }
 
 #ifdef CONFIG_SMP
@@ -3774,6 +3846,7 @@ void thaw_workqueues(void)
 
 	for_each_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		struct worker_pool *pool;
 		struct workqueue_struct *wq;
 
 		spin_lock_irq(&gcwq->lock);
@@ -3795,7 +3868,8 @@ void thaw_workqueues(void)
 				cwq_activate_first_delayed(cwq);
 		}
 
-		wake_up_worker(&gcwq->pool);
+		for_each_worker_pool(pool, gcwq)
+			wake_up_worker(pool);
 
 		spin_unlock_irq(&gcwq->lock);
 	}
@@ -3816,25 +3890,29 @@ static int __init init_workqueues(void)
 	/* initialize gcwqs */
 	for_each_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		struct worker_pool *pool;
 
 		spin_lock_init(&gcwq->lock);
-		gcwq->pool.gcwq = gcwq;
-		INIT_LIST_HEAD(&gcwq->pool.worklist);
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		INIT_LIST_HEAD(&gcwq->pool.idle_list);
 		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
 			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
 
-		init_timer_deferrable(&gcwq->pool.idle_timer);
-		gcwq->pool.idle_timer.function = idle_worker_timeout;
-		gcwq->pool.idle_timer.data = (unsigned long)&gcwq->pool;
+		for_each_worker_pool(pool, gcwq) {
+			pool->gcwq = gcwq;
+			INIT_LIST_HEAD(&pool->worklist);
+			INIT_LIST_HEAD(&pool->idle_list);
 
-		setup_timer(&gcwq->pool.mayday_timer, gcwq_mayday_timeout,
-			    (unsigned long)&gcwq->pool);
+			init_timer_deferrable(&pool->idle_timer);
+			pool->idle_timer.function = idle_worker_timeout;
+			pool->idle_timer.data = (unsigned long)pool;
 
-		ida_init(&gcwq->pool.worker_ida);
+			setup_timer(&pool->mayday_timer, gcwq_mayday_timeout,
+				    (unsigned long)pool);
+
+			ida_init(&pool->worker_ida);
+		}
 
 		gcwq->trustee_state = TRUSTEE_DONE;
 		init_waitqueue_head(&gcwq->trustee_wait);
@@ -3843,15 +3921,20 @@ static int __init init_workqueues(void)
 	/* create the initial worker */
 	for_each_online_gcwq_cpu(cpu) {
 		struct global_cwq *gcwq = get_gcwq(cpu);
-		struct worker *worker;
+		struct worker_pool *pool;
 
 		if (cpu != WORK_CPU_UNBOUND)
 			gcwq->flags &= ~GCWQ_DISASSOCIATED;
-		worker = create_worker(&gcwq->pool, true);
-		BUG_ON(!worker);
-		spin_lock_irq(&gcwq->lock);
-		start_worker(worker);
-		spin_unlock_irq(&gcwq->lock);
+
+		for_each_worker_pool(pool, gcwq) {
+			struct worker *worker;
+
+			worker = create_worker(pool, true);
+			BUG_ON(!worker);
+			spin_lock_irq(&gcwq->lock);
+			start_worker(worker);
+			spin_unlock_irq(&gcwq->lock);
+		}
 	}
 
 	system_wq = alloc_workqueue("events", 0, 0);
-- 
1.7.7.3


^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH UPDATED v3 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
  2012-07-09 18:41   ` Tejun Heo
  (?)
@ 2012-07-14  5:24     ` Tejun Heo
  -1 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-14  5:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen

From a465fcee388d62d22e390b57c81ca8411f25a1da Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@kernel.org>
Date: Fri, 13 Jul 2012 22:16:45 -0700

WQ_HIGHPRI was implemented by queueing highpri work items at the head
of the global worklist.  Other than queueing at the head, they weren't
handled differently; unfortunately, this could lead to execution
latency of a few seconds on heavily loaded systems.

Now that workqueue code has been updated to deal with multiple
worker_pools per global_cwq, this patch reimplements WQ_HIGHPRI using
a separate worker_pool.  NR_WORKER_POOLS is bumped to two and
gcwq->pools[0] is used for normal pri work items and ->pools[1] for
highpri.  Highpri workers get a -20 nice level and have an 'H' suffix
in their names.  Note that this change increases the number of kworkers
per cpu.

POOL_HIGHPRI_PENDING, pool_determine_ins_pos() and highpri chain
wakeup code in process_one_work() are no longer used and removed.

This allows proper prioritization of highpri work items and removes
high execution latency of highpri work items.

v2: nr_running indexing bug in get_pool_nr_running() fixed.

v3: Refreshed for the get_pool_nr_running() update in the previous
    patch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Josh Hunt <joshhunt00@gmail.com>
LKML-Reference: <CAKA=qzaHqwZ8eqpLNFjxnO2fX-tgAOjmpvxgBFjv6dJeQaOW1w@mail.gmail.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
---
 Documentation/workqueue.txt |  103 ++++++++++++++++---------------------------
 kernel/workqueue.c          |  100 +++++++++++------------------------------
 2 files changed, 65 insertions(+), 138 deletions(-)

diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt
index a0b577d..a6ab4b6 100644
--- a/Documentation/workqueue.txt
+++ b/Documentation/workqueue.txt
@@ -89,25 +89,28 @@ called thread-pools.
 
 The cmwq design differentiates between the user-facing workqueues that
 subsystems and drivers queue work items on and the backend mechanism
-which manages thread-pool and processes the queued work items.
+which manages thread-pools and processes the queued work items.
 
 The backend is called gcwq.  There is one gcwq for each possible CPU
-and one gcwq to serve work items queued on unbound workqueues.
+and one gcwq to serve work items queued on unbound workqueues.  Each
+gcwq has two thread-pools - one for normal work items and the other
+for high priority ones.
 
 Subsystems and drivers can create and queue work items through special
 workqueue API functions as they see fit. They can influence some
 aspects of the way the work items are executed by setting flags on the
 workqueue they are putting the work item on. These flags include
-things like CPU locality, reentrancy, concurrency limits and more. To
-get a detailed overview refer to the API description of
+things like CPU locality, reentrancy, concurrency limits, priority and
+more.  To get a detailed overview refer to the API description of
 alloc_workqueue() below.
 
-When a work item is queued to a workqueue, the target gcwq is
-determined according to the queue parameters and workqueue attributes
-and appended on the shared worklist of the gcwq.  For example, unless
-specifically overridden, a work item of a bound workqueue will be
-queued on the worklist of exactly that gcwq that is associated to the
-CPU the issuer is running on.
+When a work item is queued to a workqueue, the target gcwq and
+thread-pool is determined according to the queue parameters and
+workqueue attributes and appended on the shared worklist of the
+thread-pool.  For example, unless specifically overridden, a work item
+of a bound workqueue will be queued on the worklist of either normal
+or highpri thread-pool of the gcwq that is associated to the CPU the
+issuer is running on.
 
 For any worker pool implementation, managing the concurrency level
 (how many execution contexts are active) is an important issue.  cmwq
@@ -115,26 +118,26 @@ tries to keep the concurrency at a minimal but sufficient level.
 Minimal to save resources and sufficient in that the system is used at
 its full capacity.
 
-Each gcwq bound to an actual CPU implements concurrency management by
-hooking into the scheduler.  The gcwq is notified whenever an active
-worker wakes up or sleeps and keeps track of the number of the
-currently runnable workers.  Generally, work items are not expected to
-hog a CPU and consume many cycles.  That means maintaining just enough
-concurrency to prevent work processing from stalling should be
-optimal.  As long as there are one or more runnable workers on the
-CPU, the gcwq doesn't start execution of a new work, but, when the
-last running worker goes to sleep, it immediately schedules a new
-worker so that the CPU doesn't sit idle while there are pending work
-items.  This allows using a minimal number of workers without losing
-execution bandwidth.
+Each thread-pool bound to an actual CPU implements concurrency
+management by hooking into the scheduler.  The thread-pool is notified
+whenever an active worker wakes up or sleeps and keeps track of the
+number of the currently runnable workers.  Generally, work items are
+not expected to hog a CPU and consume many cycles.  That means
+maintaining just enough concurrency to prevent work processing from
+stalling should be optimal.  As long as there are one or more runnable
+workers on the CPU, the thread-pool doesn't start execution of a new
+work, but, when the last running worker goes to sleep, it immediately
+schedules a new worker so that the CPU doesn't sit idle while there
+are pending work items.  This allows using a minimal number of workers
+without losing execution bandwidth.
 
 Keeping idle workers around doesn't cost other than the memory space
 for kthreads, so cmwq holds onto idle ones for a while before killing
 them.
 
 For an unbound wq, the above concurrency management doesn't apply and
-the gcwq for the pseudo unbound CPU tries to start executing all work
-items as soon as possible.  The responsibility of regulating
+the thread-pools for the pseudo unbound CPU try to start executing all
+work items as soon as possible.  The responsibility of regulating
 concurrency level is on the users.  There is also a flag to mark a
 bound wq to ignore the concurrency management.  Please refer to the
 API section for details.
@@ -205,31 +208,22 @@ resources, scheduled and executed.
 
   WQ_HIGHPRI
 
-	Work items of a highpri wq are queued at the head of the
-	worklist of the target gcwq and start execution regardless of
-	the current concurrency level.  In other words, highpri work
-	items will always start execution as soon as execution
-	resource is available.
+	Work items of a highpri wq are queued to the highpri
+	thread-pool of the target gcwq.  Highpri thread-pools are
+	served by worker threads with elevated nice level.
 
-	Ordering among highpri work items is preserved - a highpri
-	work item queued after another highpri work item will start
-	execution after the earlier highpri work item starts.
-
-	Although highpri work items are not held back by other
-	runnable work items, they still contribute to the concurrency
-	level.  Highpri work items in runnable state will prevent
-	non-highpri work items from starting execution.
-
-	This flag is meaningless for unbound wq.
+	Note that normal and highpri thread-pools don't interact with
+	each other.  Each maintains its separate pool of workers and
+	implements concurrency management among its workers.
 
   WQ_CPU_INTENSIVE
 
 	Work items of a CPU intensive wq do not contribute to the
 	concurrency level.  In other words, runnable CPU intensive
-	work items will not prevent other work items from starting
-	execution.  This is useful for bound work items which are
-	expected to hog CPU cycles so that their execution is
-	regulated by the system scheduler.
+	work items will not prevent other work items in the same
+	thread-pool from starting execution.  This is useful for bound
+	work items which are expected to hog CPU cycles so that their
+	execution is regulated by the system scheduler.
 
 	Although CPU intensive work items don't contribute to the
 	concurrency level, start of their executions is still
@@ -239,14 +233,6 @@ resources, scheduled and executed.
 
 	This flag is meaningless for unbound wq.
 
-  WQ_HIGHPRI | WQ_CPU_INTENSIVE
-
-	This combination makes the wq avoid interaction with
-	concurrency management completely and behave as a simple
-	per-CPU execution context provider.  Work items queued on a
-	highpri CPU-intensive wq start execution as soon as resources
-	are available and don't affect execution of other work items.
-
 @max_active:
 
 @max_active determines the maximum number of execution contexts per
@@ -328,20 +314,7 @@ If @max_active == 2,
  35		w2 wakes up and finishes
 
 Now, let's assume w1 and w2 are queued to a different wq q1 which has
-WQ_HIGHPRI set,
-
- TIME IN MSECS	EVENT
- 0		w1 and w2 start and burn CPU
- 5		w1 sleeps
- 10		w2 sleeps
- 10		w0 starts and burns CPU
- 15		w0 sleeps
- 15		w1 wakes up and finishes
- 20		w2 wakes up and finishes
- 25		w0 wakes up and burns CPU
- 30		w0 finishes
-
-If q1 has WQ_CPU_INTENSIVE set,
+WQ_CPU_INTENSIVE set,
 
  TIME IN MSECS	EVENT
  0		w0 starts and burns CPU
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index b0daaea..4fa9e35 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -52,7 +52,6 @@ enum {
 	/* pool flags */
 	POOL_MANAGE_WORKERS	= 1 << 0,	/* need to manage workers */
 	POOL_MANAGING_WORKERS	= 1 << 1,	/* managing workers */
-	POOL_HIGHPRI_PENDING	= 1 << 2,	/* highpri works on queue */
 
 	/* worker flags */
 	WORKER_STARTED		= 1 << 0,	/* started */
@@ -74,7 +73,7 @@ enum {
 	TRUSTEE_RELEASE		= 3,		/* release workers */
 	TRUSTEE_DONE		= 4,		/* trustee is done */
 
-	NR_WORKER_POOLS		= 1,		/* # worker pools per gcwq */
+	NR_WORKER_POOLS		= 2,		/* # worker pools per gcwq */
 
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
 	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
@@ -95,6 +94,7 @@ enum {
 	 * all cpus.  Give -20.
 	 */
 	RESCUER_NICE_LEVEL	= -20,
+	HIGHPRI_NICE_LEVEL	= -20,
 };
 
 /*
@@ -174,7 +174,7 @@ struct global_cwq {
 	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
 						/* L: hash of busy workers */
 
-	struct worker_pool	pool;		/* the worker pools */
+	struct worker_pool	pools[2];	/* normal and highpri pools */
 
 	struct task_struct	*trustee;	/* L: for gcwq shutdown */
 	unsigned int		trustee_state;	/* L: trustee state */
@@ -277,7 +277,8 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 #include <trace/events/workqueue.h>
 
 #define for_each_worker_pool(pool, gcwq)				\
-	for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+	for ((pool) = &(gcwq)->pools[0];				\
+	     (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)
 
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
 	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
@@ -473,6 +474,11 @@ static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {
 
 static int worker_thread(void *__worker);
 
+static int worker_pool_pri(struct worker_pool *pool)
+{
+	return pool - pool->gcwq->pools;
+}
+
 static struct global_cwq *get_gcwq(unsigned int cpu)
 {
 	if (cpu != WORK_CPU_UNBOUND)
@@ -484,7 +490,7 @@ static struct global_cwq *get_gcwq(unsigned int cpu)
 static atomic_t *get_pool_nr_running(struct worker_pool *pool)
 {
 	int cpu = pool->gcwq->cpu;
-	int idx = 0;
+	int idx = worker_pool_pri(pool);
 
 	if (cpu != WORK_CPU_UNBOUND)
 		return &per_cpu(pool_nr_running, cpu)[idx];
@@ -586,15 +592,14 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
 }
 
 /*
- * Policy functions.  These define the policies on how the global
- * worker pool is managed.  Unless noted otherwise, these functions
- * assume that they're being called with gcwq->lock held.
+ * Policy functions.  These define the policies on how the global worker
+ * pools are managed.  Unless noted otherwise, these functions assume that
+ * they're being called with gcwq->lock held.
  */
 
 static bool __need_more_worker(struct worker_pool *pool)
 {
-	return !atomic_read(get_pool_nr_running(pool)) ||
-		(pool->flags & POOL_HIGHPRI_PENDING);
+	return !atomic_read(get_pool_nr_running(pool));
 }
 
 /*
@@ -621,9 +626,7 @@ static bool keep_working(struct worker_pool *pool)
 {
 	atomic_t *nr_running = get_pool_nr_running(pool);
 
-	return !list_empty(&pool->worklist) &&
-		(atomic_read(nr_running) <= 1 ||
-		 (pool->flags & POOL_HIGHPRI_PENDING));
+	return !list_empty(&pool->worklist) && atomic_read(nr_running) <= 1;
 }
 
 /* Do we need a new worker?  Called from manager. */
@@ -892,43 +895,6 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
 }
 
 /**
- * pool_determine_ins_pos - find insertion position
- * @pool: pool of interest
- * @cwq: cwq a work is being queued for
- *
- * A work for @cwq is about to be queued on @pool, determine insertion
- * position for the work.  If @cwq is for HIGHPRI wq, the work is
- * queued at the head of the queue but in FIFO order with respect to
- * other HIGHPRI works; otherwise, at the end of the queue.  This
- * function also sets POOL_HIGHPRI_PENDING flag to hint @pool that
- * there are HIGHPRI works pending.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to inserstion position.
- */
-static inline struct list_head *pool_determine_ins_pos(struct worker_pool *pool,
-					       struct cpu_workqueue_struct *cwq)
-{
-	struct work_struct *twork;
-
-	if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
-		return &pool->worklist;
-
-	list_for_each_entry(twork, &pool->worklist, entry) {
-		struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
-
-		if (!(tcwq->wq->flags & WQ_HIGHPRI))
-			break;
-	}
-
-	pool->flags |= POOL_HIGHPRI_PENDING;
-	return &twork->entry;
-}
-
-/**
  * insert_work - insert a work into gcwq
  * @cwq: cwq @work belongs to
  * @work: work to insert
@@ -1068,7 +1034,7 @@ static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
 	if (likely(cwq->nr_active < cwq->max_active)) {
 		trace_workqueue_activate_work(work);
 		cwq->nr_active++;
-		worklist = pool_determine_ins_pos(cwq->pool, cwq);
+		worklist = &cwq->pool->worklist;
 	} else {
 		work_flags |= WORK_STRUCT_DELAYED;
 		worklist = &cwq->delayed_works;
@@ -1385,6 +1351,7 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
 {
 	struct global_cwq *gcwq = pool->gcwq;
 	bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+	const char *pri = worker_pool_pri(pool) ? "H" : "";
 	struct worker *worker = NULL;
 	int id = -1;
 
@@ -1406,15 +1373,17 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
 
 	if (!on_unbound_cpu)
 		worker->task = kthread_create_on_node(worker_thread,
-						      worker,
-						      cpu_to_node(gcwq->cpu),
-						      "kworker/%u:%d", gcwq->cpu, id);
+					worker, cpu_to_node(gcwq->cpu),
+					"kworker/%u:%d%s", gcwq->cpu, id, pri);
 	else
 		worker->task = kthread_create(worker_thread, worker,
-					      "kworker/u:%d", id);
+					      "kworker/u:%d%s", id, pri);
 	if (IS_ERR(worker->task))
 		goto fail;
 
+	if (worker_pool_pri(pool))
+		set_user_nice(worker->task, HIGHPRI_NICE_LEVEL);
+
 	/*
 	 * A rogue worker will become a regular one if CPU comes
 	 * online later on.  Make sure every worker has
@@ -1761,10 +1730,9 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *work = list_first_entry(&cwq->delayed_works,
 						    struct work_struct, entry);
-	struct list_head *pos = pool_determine_ins_pos(cwq->pool, cwq);
 
 	trace_workqueue_activate_work(work);
-	move_linked_works(work, pos, NULL);
+	move_linked_works(work, &cwq->pool->worklist, NULL);
 	__clear_bit(WORK_STRUCT_DELAYED_BIT, work_data_bits(work));
 	cwq->nr_active++;
 }
@@ -1880,21 +1848,6 @@ __acquires(&gcwq->lock)
 	list_del_init(&work->entry);
 
 	/*
-	 * If HIGHPRI_PENDING, check the next work, and, if HIGHPRI,
-	 * wake up another worker; otherwise, clear HIGHPRI_PENDING.
-	 */
-	if (unlikely(pool->flags & POOL_HIGHPRI_PENDING)) {
-		struct work_struct *nwork = list_first_entry(&pool->worklist,
-					 struct work_struct, entry);
-
-		if (!list_empty(&pool->worklist) &&
-		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
-			wake_up_worker(pool);
-		else
-			pool->flags &= ~POOL_HIGHPRI_PENDING;
-	}
-
-	/*
 	 * CPU intensive works don't participate in concurrency
 	 * management.  They're the scheduler's responsibility.
 	 */
@@ -3047,9 +3000,10 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		int pool_idx = (bool)(flags & WQ_HIGHPRI);
 
 		BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
-		cwq->pool = &gcwq->pool;
+		cwq->pool = &gcwq->pools[pool_idx];
 		cwq->wq = wq;
 		cwq->flush_color = -1;
 		cwq->max_active = max_active;
-- 
1.7.7.3

^ permalink raw reply related	[flat|nested] 96+ messages in thread

* [PATCH UPDATED v3 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
@ 2012-07-14  5:24     ` Tejun Heo
  0 siblings, 0 replies; 96+ messages in thread
From: Tejun Heo @ 2012-07-14  5:24 UTC (permalink / raw)
  To: linux-kernel
  Cc: torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar, herbert,
	davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel, gustavo,
	johan.hedberg, linux-bluetooth, martin.petersen

From a465fcee388d62d22e390b57c81ca8411f25a1da Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@kernel.org>
Date: Fri, 13 Jul 2012 22:16:45 -0700

WQ_HIGHPRI was implemented by queueing highpri work items at the head
of the global worklist.  Other than queueing at the head, they weren't
handled differently; unfortunately, this could lead to execution
latency of a few seconds on heavily loaded systems.

Now that workqueue code has been updated to deal with multiple
worker_pools per global_cwq, this patch reimplements WQ_HIGHPRI using
a separate worker_pool.  NR_WORKER_POOLS is bumped to two and
gcwq->pools[0] is used for normal pri work items and ->pools[1] for
highpri.  Highpri workers get -20 nice level and have an 'H' suffix in
their names.  Note that this change increases the number of kworkers
per cpu.

POOL_HIGHPRI_PENDING, pool_determine_ins_pos() and highpri chain
wakeup code in process_one_work() are no longer used and removed.

This allows proper prioritization of highpri work items and removes
high execution latency of highpri work items.

v2: nr_running indexing bug in get_pool_nr_running() fixed.

v3: Refreshed for the get_pool_nr_running() update in the previous
    patch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Josh Hunt <joshhunt00@gmail.com>
LKML-Reference: <CAKA=qzaHqwZ8eqpLNFjxnO2fX-tgAOjmpvxgBFjv6dJeQaOW1w@mail.gmail.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
---
 Documentation/workqueue.txt |  103 ++++++++++++++++---------------------------
 kernel/workqueue.c          |  100 +++++++++++------------------------------
 2 files changed, 65 insertions(+), 138 deletions(-)

diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt
index a0b577d..a6ab4b6 100644
--- a/Documentation/workqueue.txt
+++ b/Documentation/workqueue.txt
@@ -89,25 +89,28 @@ called thread-pools.
 
 The cmwq design differentiates between the user-facing workqueues that
 subsystems and drivers queue work items on and the backend mechanism
-which manages thread-pool and processes the queued work items.
+which manages thread-pools and processes the queued work items.
 
 The backend is called gcwq.  There is one gcwq for each possible CPU
-and one gcwq to serve work items queued on unbound workqueues.
+and one gcwq to serve work items queued on unbound workqueues.  Each
+gcwq has two thread-pools - one for normal work items and the other
+for high priority ones.
 
 Subsystems and drivers can create and queue work items through special
 workqueue API functions as they see fit. They can influence some
 aspects of the way the work items are executed by setting flags on the
 workqueue they are putting the work item on. These flags include
-things like CPU locality, reentrancy, concurrency limits and more. To
-get a detailed overview refer to the API description of
+things like CPU locality, reentrancy, concurrency limits, priority and
+more.  To get a detailed overview refer to the API description of
 alloc_workqueue() below.
 
-When a work item is queued to a workqueue, the target gcwq is
-determined according to the queue parameters and workqueue attributes
-and appended on the shared worklist of the gcwq.  For example, unless
-specifically overridden, a work item of a bound workqueue will be
-queued on the worklist of exactly that gcwq that is associated to the
-CPU the issuer is running on.
+When a work item is queued to a workqueue, the target gcwq and
+thread-pool is determined according to the queue parameters and
+workqueue attributes and appended on the shared worklist of the
+thread-pool.  For example, unless specifically overridden, a work item
+of a bound workqueue will be queued on the worklist of either normal
+or highpri thread-pool of the gcwq that is associated to the CPU the
+issuer is running on.
 
 For any worker pool implementation, managing the concurrency level
 (how many execution contexts are active) is an important issue.  cmwq
@@ -115,26 +118,26 @@ tries to keep the concurrency at a minimal but sufficient level.
 Minimal to save resources and sufficient in that the system is used at
 its full capacity.
 
-Each gcwq bound to an actual CPU implements concurrency management by
-hooking into the scheduler.  The gcwq is notified whenever an active
-worker wakes up or sleeps and keeps track of the number of the
-currently runnable workers.  Generally, work items are not expected to
-hog a CPU and consume many cycles.  That means maintaining just enough
-concurrency to prevent work processing from stalling should be
-optimal.  As long as there are one or more runnable workers on the
-CPU, the gcwq doesn't start execution of a new work, but, when the
-last running worker goes to sleep, it immediately schedules a new
-worker so that the CPU doesn't sit idle while there are pending work
-items.  This allows using a minimal number of workers without losing
-execution bandwidth.
+Each thread-pool bound to an actual CPU implements concurrency
+management by hooking into the scheduler.  The thread-pool is notified
+whenever an active worker wakes up or sleeps and keeps track of the
+number of the currently runnable workers.  Generally, work items are
+not expected to hog a CPU and consume many cycles.  That means
+maintaining just enough concurrency to prevent work processing from
+stalling should be optimal.  As long as there are one or more runnable
+workers on the CPU, the thread-pool doesn't start execution of a new
+work, but, when the last running worker goes to sleep, it immediately
+schedules a new worker so that the CPU doesn't sit idle while there
+are pending work items.  This allows using a minimal number of workers
+without losing execution bandwidth.
 
 Keeping idle workers around doesn't cost other than the memory space
 for kthreads, so cmwq holds onto idle ones for a while before killing
 them.
 
 For an unbound wq, the above concurrency management doesn't apply and
-the gcwq for the pseudo unbound CPU tries to start executing all work
-items as soon as possible.  The responsibility of regulating
+the thread-pools for the pseudo unbound CPU try to start executing all
+work items as soon as possible.  The responsibility of regulating
 concurrency level is on the users.  There is also a flag to mark a
 bound wq to ignore the concurrency management.  Please refer to the
 API section for details.
@@ -205,31 +208,22 @@ resources, scheduled and executed.
 
   WQ_HIGHPRI
 
-	Work items of a highpri wq are queued at the head of the
-	worklist of the target gcwq and start execution regardless of
-	the current concurrency level.  In other words, highpri work
-	items will always start execution as soon as execution
-	resource is available.
+	Work items of a highpri wq are queued to the highpri
+	thread-pool of the target gcwq.  Highpri thread-pools are
+	served by worker threads with elevated nice level.
 
-	Ordering among highpri work items is preserved - a highpri
-	work item queued after another highpri work item will start
-	execution after the earlier highpri work item starts.
-
-	Although highpri work items are not held back by other
-	runnable work items, they still contribute to the concurrency
-	level.  Highpri work items in runnable state will prevent
-	non-highpri work items from starting execution.
-
-	This flag is meaningless for unbound wq.
+	Note that normal and highpri thread-pools don't interact with
+	each other.  Each maintains its separate pool of workers and
+	implements concurrency management among its workers.
 
   WQ_CPU_INTENSIVE
 
 	Work items of a CPU intensive wq do not contribute to the
 	concurrency level.  In other words, runnable CPU intensive
-	work items will not prevent other work items from starting
-	execution.  This is useful for bound work items which are
-	expected to hog CPU cycles so that their execution is
-	regulated by the system scheduler.
+	work items will not prevent other work items in the same
+	thread-pool from starting execution.  This is useful for bound
+	work items which are expected to hog CPU cycles so that their
+	execution is regulated by the system scheduler.
 
 	Although CPU intensive work items don't contribute to the
 	concurrency level, start of their executions is still
@@ -239,14 +233,6 @@ resources, scheduled and executed.
 
 	This flag is meaningless for unbound wq.
 
-  WQ_HIGHPRI | WQ_CPU_INTENSIVE
-
-	This combination makes the wq avoid interaction with
-	concurrency management completely and behave as a simple
-	per-CPU execution context provider.  Work items queued on a
-	highpri CPU-intensive wq start execution as soon as resources
-	are available and don't affect execution of other work items.
-
 @max_active:
 
 @max_active determines the maximum number of execution contexts per
@@ -328,20 +314,7 @@ If @max_active == 2,
  35		w2 wakes up and finishes
 
 Now, let's assume w1 and w2 are queued to a different wq q1 which has
-WQ_HIGHPRI set,
-
- TIME IN MSECS	EVENT
- 0		w1 and w2 start and burn CPU
- 5		w1 sleeps
- 10		w2 sleeps
- 10		w0 starts and burns CPU
- 15		w0 sleeps
- 15		w1 wakes up and finishes
- 20		w2 wakes up and finishes
- 25		w0 wakes up and burns CPU
- 30		w0 finishes
-
-If q1 has WQ_CPU_INTENSIVE set,
+WQ_CPU_INTENSIVE set,
 
  TIME IN MSECS	EVENT
  0		w0 starts and burns CPU
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index b0daaea..4fa9e35 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -52,7 +52,6 @@ enum {
 	/* pool flags */
 	POOL_MANAGE_WORKERS	= 1 << 0,	/* need to manage workers */
 	POOL_MANAGING_WORKERS	= 1 << 1,	/* managing workers */
-	POOL_HIGHPRI_PENDING	= 1 << 2,	/* highpri works on queue */
 
 	/* worker flags */
 	WORKER_STARTED		= 1 << 0,	/* started */
@@ -74,7 +73,7 @@ enum {
 	TRUSTEE_RELEASE		= 3,		/* release workers */
 	TRUSTEE_DONE		= 4,		/* trustee is done */
 
-	NR_WORKER_POOLS		= 1,		/* # worker pools per gcwq */
+	NR_WORKER_POOLS		= 2,		/* # worker pools per gcwq */
 
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
 	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
@@ -95,6 +94,7 @@ enum {
 	 * all cpus.  Give -20.
 	 */
 	RESCUER_NICE_LEVEL	= -20,
+	HIGHPRI_NICE_LEVEL	= -20,
 };
 
 /*
@@ -174,7 +174,7 @@ struct global_cwq {
 	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
 						/* L: hash of busy workers */
 
-	struct worker_pool	pool;		/* the worker pools */
+	struct worker_pool	pools[2];	/* normal and highpri pools */
 
 	struct task_struct	*trustee;	/* L: for gcwq shutdown */
 	unsigned int		trustee_state;	/* L: trustee state */
@@ -277,7 +277,8 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 #include <trace/events/workqueue.h>
 
 #define for_each_worker_pool(pool, gcwq)				\
-	for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+	for ((pool) = &(gcwq)->pools[0];				\
+	     (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)
 
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
 	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
@@ -473,6 +474,11 @@ static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {
 
 static int worker_thread(void *__worker);
 
+static int worker_pool_pri(struct worker_pool *pool)
+{
+	return pool - pool->gcwq->pools;
+}
+
 static struct global_cwq *get_gcwq(unsigned int cpu)
 {
 	if (cpu != WORK_CPU_UNBOUND)
@@ -484,7 +490,7 @@ static struct global_cwq *get_gcwq(unsigned int cpu)
 static atomic_t *get_pool_nr_running(struct worker_pool *pool)
 {
 	int cpu = pool->gcwq->cpu;
-	int idx = 0;
+	int idx = worker_pool_pri(pool);
 
 	if (cpu != WORK_CPU_UNBOUND)
 		return &per_cpu(pool_nr_running, cpu)[idx];
@@ -586,15 +592,14 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
 }
 
 /*
- * Policy functions.  These define the policies on how the global
- * worker pool is managed.  Unless noted otherwise, these functions
- * assume that they're being called with gcwq->lock held.
+ * Policy functions.  These define the policies on how the global worker
+ * pools are managed.  Unless noted otherwise, these functions assume that
+ * they're being called with gcwq->lock held.
  */
 
 static bool __need_more_worker(struct worker_pool *pool)
 {
-	return !atomic_read(get_pool_nr_running(pool)) ||
-		(pool->flags & POOL_HIGHPRI_PENDING);
+	return !atomic_read(get_pool_nr_running(pool));
 }
 
 /*
@@ -621,9 +626,7 @@ static bool keep_working(struct worker_pool *pool)
 {
 	atomic_t *nr_running = get_pool_nr_running(pool);
 
-	return !list_empty(&pool->worklist) &&
-		(atomic_read(nr_running) <= 1 ||
-		 (pool->flags & POOL_HIGHPRI_PENDING));
+	return !list_empty(&pool->worklist) && atomic_read(nr_running) <= 1;
 }
 
 /* Do we need a new worker?  Called from manager. */
@@ -892,43 +895,6 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
 }
 
 /**
- * pool_determine_ins_pos - find insertion position
- * @pool: pool of interest
- * @cwq: cwq a work is being queued for
- *
- * A work for @cwq is about to be queued on @pool, determine insertion
- * position for the work.  If @cwq is for HIGHPRI wq, the work is
- * queued at the head of the queue but in FIFO order with respect to
- * other HIGHPRI works; otherwise, at the end of the queue.  This
- * function also sets POOL_HIGHPRI_PENDING flag to hint @pool that
- * there are HIGHPRI works pending.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to inserstion position.
- */
-static inline struct list_head *pool_determine_ins_pos(struct worker_pool *pool,
-					       struct cpu_workqueue_struct *cwq)
-{
-	struct work_struct *twork;
-
-	if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
-		return &pool->worklist;
-
-	list_for_each_entry(twork, &pool->worklist, entry) {
-		struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
-
-		if (!(tcwq->wq->flags & WQ_HIGHPRI))
-			break;
-	}
-
-	pool->flags |= POOL_HIGHPRI_PENDING;
-	return &twork->entry;
-}
-
-/**
  * insert_work - insert a work into gcwq
  * @cwq: cwq @work belongs to
  * @work: work to insert
@@ -1068,7 +1034,7 @@ static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
 	if (likely(cwq->nr_active < cwq->max_active)) {
 		trace_workqueue_activate_work(work);
 		cwq->nr_active++;
-		worklist = pool_determine_ins_pos(cwq->pool, cwq);
+		worklist = &cwq->pool->worklist;
 	} else {
 		work_flags |= WORK_STRUCT_DELAYED;
 		worklist = &cwq->delayed_works;
@@ -1385,6 +1351,7 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
 {
 	struct global_cwq *gcwq = pool->gcwq;
 	bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+	const char *pri = worker_pool_pri(pool) ? "H" : "";
 	struct worker *worker = NULL;
 	int id = -1;
 
@@ -1406,15 +1373,17 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
 
 	if (!on_unbound_cpu)
 		worker->task = kthread_create_on_node(worker_thread,
-						      worker,
-						      cpu_to_node(gcwq->cpu),
-						      "kworker/%u:%d", gcwq->cpu, id);
+					worker, cpu_to_node(gcwq->cpu),
+					"kworker/%u:%d%s", gcwq->cpu, id, pri);
 	else
 		worker->task = kthread_create(worker_thread, worker,
-					      "kworker/u:%d", id);
+					      "kworker/u:%d%s", id, pri);
 	if (IS_ERR(worker->task))
 		goto fail;
 
+	if (worker_pool_pri(pool))
+		set_user_nice(worker->task, HIGHPRI_NICE_LEVEL);
+
 	/*
 	 * A rogue worker will become a regular one if CPU comes
 	 * online later on.  Make sure every worker has
@@ -1761,10 +1730,9 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *work = list_first_entry(&cwq->delayed_works,
 						    struct work_struct, entry);
-	struct list_head *pos = pool_determine_ins_pos(cwq->pool, cwq);
 
 	trace_workqueue_activate_work(work);
-	move_linked_works(work, pos, NULL);
+	move_linked_works(work, &cwq->pool->worklist, NULL);
 	__clear_bit(WORK_STRUCT_DELAYED_BIT, work_data_bits(work));
 	cwq->nr_active++;
 }
@@ -1880,21 +1848,6 @@ __acquires(&gcwq->lock)
 	list_del_init(&work->entry);
 
 	/*
-	 * If HIGHPRI_PENDING, check the next work, and, if HIGHPRI,
-	 * wake up another worker; otherwise, clear HIGHPRI_PENDING.
-	 */
-	if (unlikely(pool->flags & POOL_HIGHPRI_PENDING)) {
-		struct work_struct *nwork = list_first_entry(&pool->worklist,
-					 struct work_struct, entry);
-
-		if (!list_empty(&pool->worklist) &&
-		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
-			wake_up_worker(pool);
-		else
-			pool->flags &= ~POOL_HIGHPRI_PENDING;
-	}
-
-	/*
 	 * CPU intensive works don't participate in concurrency
 	 * management.  They're the scheduler's responsibility.
 	 */
@@ -3047,9 +3000,10 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		int pool_idx = (bool)(flags & WQ_HIGHPRI);
 
 		BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
-		cwq->pool = &gcwq->pool;
+		cwq->pool = &gcwq->pools[pool_idx];
 		cwq->wq = wq;
 		cwq->flush_color = -1;
 		cwq->max_active = max_active;
-- 
1.7.7.3



+issuer is running on.
 
 For any worker pool implementation, managing the concurrency level
 (how many execution contexts are active) is an important issue.  cmwq
@@ -115,26 +118,26 @@ tries to keep the concurrency at a minimal but sufficient level.
 Minimal to save resources and sufficient in that the system is used at
 its full capacity.
 
-Each gcwq bound to an actual CPU implements concurrency management by
-hooking into the scheduler.  The gcwq is notified whenever an active
-worker wakes up or sleeps and keeps track of the number of the
-currently runnable workers.  Generally, work items are not expected to
-hog a CPU and consume many cycles.  That means maintaining just enough
-concurrency to prevent work processing from stalling should be
-optimal.  As long as there are one or more runnable workers on the
-CPU, the gcwq doesn't start execution of a new work, but, when the
-last running worker goes to sleep, it immediately schedules a new
-worker so that the CPU doesn't sit idle while there are pending work
-items.  This allows using a minimal number of workers without losing
-execution bandwidth.
+Each thread-pool bound to an actual CPU implements concurrency
+management by hooking into the scheduler.  The thread-pool is notified
+whenever an active worker wakes up or sleeps and keeps track of the
+number of the currently runnable workers.  Generally, work items are
+not expected to hog a CPU and consume many cycles.  That means
+maintaining just enough concurrency to prevent work processing from
+stalling should be optimal.  As long as there are one or more runnable
+workers on the CPU, the thread-pool doesn't start execution of a new
+work, but, when the last running worker goes to sleep, it immediately
+schedules a new worker so that the CPU doesn't sit idle while there
+are pending work items.  This allows using a minimal number of workers
+without losing execution bandwidth.
 
 Keeping idle workers around doesn't cost other than the memory space
 for kthreads, so cmwq holds onto idle ones for a while before killing
 them.
 
 For an unbound wq, the above concurrency management doesn't apply and
-the gcwq for the pseudo unbound CPU tries to start executing all work
-items as soon as possible.  The responsibility of regulating
+the thread-pools for the pseudo unbound CPU try to start executing all
+work items as soon as possible.  The responsibility of regulating
 concurrency level is on the users.  There is also a flag to mark a
 bound wq to ignore the concurrency management.  Please refer to the
 API section for details.
@@ -205,31 +208,22 @@ resources, scheduled and executed.
 
   WQ_HIGHPRI
 
-	Work items of a highpri wq are queued at the head of the
-	worklist of the target gcwq and start execution regardless of
-	the current concurrency level.  In other words, highpri work
-	items will always start execution as soon as execution
-	resource is available.
+	Work items of a highpri wq are queued to the highpri
+	thread-pool of the target gcwq.  Highpri thread-pools are
+	served by worker threads with elevated nice level.
 
-	Ordering among highpri work items is preserved - a highpri
-	work item queued after another highpri work item will start
-	execution after the earlier highpri work item starts.
-
-	Although highpri work items are not held back by other
-	runnable work items, they still contribute to the concurrency
-	level.  Highpri work items in runnable state will prevent
-	non-highpri work items from starting execution.
-
-	This flag is meaningless for unbound wq.
+	Note that normal and highpri thread-pools don't interact with
+	each other.  Each maintains its separate pool of workers and
+	implements concurrency management among its workers.
 
   WQ_CPU_INTENSIVE
 
 	Work items of a CPU intensive wq do not contribute to the
 	concurrency level.  In other words, runnable CPU intensive
-	work items will not prevent other work items from starting
-	execution.  This is useful for bound work items which are
-	expected to hog CPU cycles so that their execution is
-	regulated by the system scheduler.
+	work items will not prevent other work items in the same
+	thread-pool from starting execution.  This is useful for bound
+	work items which are expected to hog CPU cycles so that their
+	execution is regulated by the system scheduler.
 
 	Although CPU intensive work items don't contribute to the
 	concurrency level, start of their executions is still
@@ -239,14 +233,6 @@ resources, scheduled and executed.
 
 	This flag is meaningless for unbound wq.
 
-  WQ_HIGHPRI | WQ_CPU_INTENSIVE
-
-	This combination makes the wq avoid interaction with
-	concurrency management completely and behave as a simple
-	per-CPU execution context provider.  Work items queued on a
-	highpri CPU-intensive wq start execution as soon as resources
-	are available and don't affect execution of other work items.
-
 @max_active:
 
 @max_active determines the maximum number of execution contexts per
@@ -328,20 +314,7 @@ If @max_active == 2,
  35		w2 wakes up and finishes
 
 Now, let's assume w1 and w2 are queued to a different wq q1 which has
-WQ_HIGHPRI set,
-
- TIME IN MSECS	EVENT
- 0		w1 and w2 start and burn CPU
- 5		w1 sleeps
- 10		w2 sleeps
- 10		w0 starts and burns CPU
- 15		w0 sleeps
- 15		w1 wakes up and finishes
- 20		w2 wakes up and finishes
- 25		w0 wakes up and burns CPU
- 30		w0 finishes
-
-If q1 has WQ_CPU_INTENSIVE set,
+WQ_CPU_INTENSIVE set,
 
  TIME IN MSECS	EVENT
  0		w0 starts and burns CPU
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index b0daaea..4fa9e35 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -52,7 +52,6 @@ enum {
 	/* pool flags */
 	POOL_MANAGE_WORKERS	= 1 << 0,	/* need to manage workers */
 	POOL_MANAGING_WORKERS	= 1 << 1,	/* managing workers */
-	POOL_HIGHPRI_PENDING	= 1 << 2,	/* highpri works on queue */
 
 	/* worker flags */
 	WORKER_STARTED		= 1 << 0,	/* started */
@@ -74,7 +73,7 @@ enum {
 	TRUSTEE_RELEASE		= 3,		/* release workers */
 	TRUSTEE_DONE		= 4,		/* trustee is done */
 
-	NR_WORKER_POOLS		= 1,		/* # worker pools per gcwq */
+	NR_WORKER_POOLS		= 2,		/* # worker pools per gcwq */
 
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
 	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
@@ -95,6 +94,7 @@ enum {
 	 * all cpus.  Give -20.
 	 */
 	RESCUER_NICE_LEVEL	= -20,
+	HIGHPRI_NICE_LEVEL	= -20,
 };
 
 /*
@@ -174,7 +174,7 @@ struct global_cwq {
 	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
 						/* L: hash of busy workers */
 
-	struct worker_pool	pool;		/* the worker pools */
+	struct worker_pool	pools[2];	/* normal and highpri pools */
 
 	struct task_struct	*trustee;	/* L: for gcwq shutdown */
 	unsigned int		trustee_state;	/* L: trustee state */
@@ -277,7 +277,8 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 #include <trace/events/workqueue.h>
 
 #define for_each_worker_pool(pool, gcwq)				\
-	for ((pool) = &(gcwq)->pool; (pool); (pool) = NULL)
+	for ((pool) = &(gcwq)->pools[0];				\
+	     (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)
 
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
 	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
@@ -473,6 +474,11 @@ static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {
 
 static int worker_thread(void *__worker);
 
+static int worker_pool_pri(struct worker_pool *pool)
+{
+	return pool - pool->gcwq->pools;
+}
+
 static struct global_cwq *get_gcwq(unsigned int cpu)
 {
 	if (cpu != WORK_CPU_UNBOUND)
@@ -484,7 +490,7 @@ static struct global_cwq *get_gcwq(unsigned int cpu)
 static atomic_t *get_pool_nr_running(struct worker_pool *pool)
 {
 	int cpu = pool->gcwq->cpu;
-	int idx = 0;
+	int idx = worker_pool_pri(pool);
 
 	if (cpu != WORK_CPU_UNBOUND)
 		return &per_cpu(pool_nr_running, cpu)[idx];
@@ -586,15 +592,14 @@ static struct global_cwq *get_work_gcwq(struct work_struct *work)
 }
 
 /*
- * Policy functions.  These define the policies on how the global
- * worker pool is managed.  Unless noted otherwise, these functions
- * assume that they're being called with gcwq->lock held.
+ * Policy functions.  These define the policies on how the global worker
+ * pools are managed.  Unless noted otherwise, these functions assume that
+ * they're being called with gcwq->lock held.
  */
 
 static bool __need_more_worker(struct worker_pool *pool)
 {
-	return !atomic_read(get_pool_nr_running(pool)) ||
-		(pool->flags & POOL_HIGHPRI_PENDING);
+	return !atomic_read(get_pool_nr_running(pool));
 }
 
 /*
@@ -621,9 +626,7 @@ static bool keep_working(struct worker_pool *pool)
 {
 	atomic_t *nr_running = get_pool_nr_running(pool);
 
-	return !list_empty(&pool->worklist) &&
-		(atomic_read(nr_running) <= 1 ||
-		 (pool->flags & POOL_HIGHPRI_PENDING));
+	return !list_empty(&pool->worklist) && atomic_read(nr_running) <= 1;
 }
 
 /* Do we need a new worker?  Called from manager. */
@@ -892,43 +895,6 @@ static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
 }
 
 /**
- * pool_determine_ins_pos - find insertion position
- * @pool: pool of interest
- * @cwq: cwq a work is being queued for
- *
- * A work for @cwq is about to be queued on @pool, determine insertion
- * position for the work.  If @cwq is for HIGHPRI wq, the work is
- * queued at the head of the queue but in FIFO order with respect to
- * other HIGHPRI works; otherwise, at the end of the queue.  This
- * function also sets POOL_HIGHPRI_PENDING flag to hint @pool that
- * there are HIGHPRI works pending.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to inserstion position.
- */
-static inline struct list_head *pool_determine_ins_pos(struct worker_pool *pool,
-					       struct cpu_workqueue_struct *cwq)
-{
-	struct work_struct *twork;
-
-	if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
-		return &pool->worklist;
-
-	list_for_each_entry(twork, &pool->worklist, entry) {
-		struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
-
-		if (!(tcwq->wq->flags & WQ_HIGHPRI))
-			break;
-	}
-
-	pool->flags |= POOL_HIGHPRI_PENDING;
-	return &twork->entry;
-}
-
-/**
  * insert_work - insert a work into gcwq
  * @cwq: cwq @work belongs to
  * @work: work to insert
@@ -1068,7 +1034,7 @@ static void __queue_work(unsigned int cpu, struct workqueue_struct *wq,
 	if (likely(cwq->nr_active < cwq->max_active)) {
 		trace_workqueue_activate_work(work);
 		cwq->nr_active++;
-		worklist = pool_determine_ins_pos(cwq->pool, cwq);
+		worklist = &cwq->pool->worklist;
 	} else {
 		work_flags |= WORK_STRUCT_DELAYED;
 		worklist = &cwq->delayed_works;
@@ -1385,6 +1351,7 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
 {
 	struct global_cwq *gcwq = pool->gcwq;
 	bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+	const char *pri = worker_pool_pri(pool) ? "H" : "";
 	struct worker *worker = NULL;
 	int id = -1;
 
@@ -1406,15 +1373,17 @@ static struct worker *create_worker(struct worker_pool *pool, bool bind)
 
 	if (!on_unbound_cpu)
 		worker->task = kthread_create_on_node(worker_thread,
-						      worker,
-						      cpu_to_node(gcwq->cpu),
-						      "kworker/%u:%d", gcwq->cpu, id);
+					worker, cpu_to_node(gcwq->cpu),
+					"kworker/%u:%d%s", gcwq->cpu, id, pri);
 	else
 		worker->task = kthread_create(worker_thread, worker,
-					      "kworker/u:%d", id);
+					      "kworker/u:%d%s", id, pri);
 	if (IS_ERR(worker->task))
 		goto fail;
 
+	if (worker_pool_pri(pool))
+		set_user_nice(worker->task, HIGHPRI_NICE_LEVEL);
+
 	/*
 	 * A rogue worker will become a regular one if CPU comes
 	 * online later on.  Make sure every worker has
@@ -1761,10 +1730,9 @@ static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
 {
 	struct work_struct *work = list_first_entry(&cwq->delayed_works,
 						    struct work_struct, entry);
-	struct list_head *pos = pool_determine_ins_pos(cwq->pool, cwq);
 
 	trace_workqueue_activate_work(work);
-	move_linked_works(work, pos, NULL);
+	move_linked_works(work, &cwq->pool->worklist, NULL);
 	__clear_bit(WORK_STRUCT_DELAYED_BIT, work_data_bits(work));
 	cwq->nr_active++;
 }
@@ -1880,21 +1848,6 @@ __acquires(&gcwq->lock)
 	list_del_init(&work->entry);
 
 	/*
-	 * If HIGHPRI_PENDING, check the next work, and, if HIGHPRI,
-	 * wake up another worker; otherwise, clear HIGHPRI_PENDING.
-	 */
-	if (unlikely(pool->flags & POOL_HIGHPRI_PENDING)) {
-		struct work_struct *nwork = list_first_entry(&pool->worklist,
-					 struct work_struct, entry);
-
-		if (!list_empty(&pool->worklist) &&
-		    get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
-			wake_up_worker(pool);
-		else
-			pool->flags &= ~POOL_HIGHPRI_PENDING;
-	}
-
-	/*
 	 * CPU intensive works don't participate in concurrency
 	 * management.  They're the scheduler's responsibility.
 	 */
@@ -3047,9 +3000,10 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 	for_each_cwq_cpu(cpu, wq) {
 		struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
 		struct global_cwq *gcwq = get_gcwq(cpu);
+		int pool_idx = (bool)(flags & WQ_HIGHPRI);
 
 		BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
-		cwq->pool = &gcwq->pool;
+		cwq->pool = &gcwq->pools[pool_idx];
 		cwq->wq = wq;
 		cwq->flush_color = -1;
 		cwq->max_active = max_active;
-- 
1.7.7.3

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* Re: [PATCH UPDATED 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool
  2012-07-14  3:56     ` Tejun Heo
  (?)
@ 2012-07-14  8:18       ` Fengguang Wu
  -1 siblings, 0 replies; 96+ messages in thread
From: Fengguang Wu @ 2012-07-14  8:18 UTC (permalink / raw)
  To: Tejun Heo
  Cc: linux-kernel, torvalds, joshhunt00, axboe, rni, vgoyal, vwadekar,
	herbert, davem, linux-crypto, swhiteho, bpm, elder, xfs, marcel,
	gustavo, johan.hedberg, linux-bluetooth, martin.petersen,
	Tony Luck

> v2: nr_running indexing bug in get_pool_nr_running() fixed.
> 
> Signed-off-by: Tejun Heo <tj@kernel.org>
> Reported-by: Josh Hunt <joshhunt00@gmail.com>
> LKML-Reference: <CAKA=qzaHqwZ8eqpLNFjxnO2fX-tgAOjmpvxgBFjv6dJeQaOW1w@mail.gmail.com>
> Cc: Tony Luck <tony.luck@intel.com>
> Cc: Fengguang Wu <fengguang.wu@intel.com>
> ---
> git branch updated accordingly.  Thanks.

It works now, thank you very much!

Tested-by: Fengguang Wu <fengguang.wu@intel.com>

* Re: [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool()
  2012-07-14  5:00             ` Linus Torvalds
                               ` (2 preceding siblings ...)
  (?)
@ 2012-07-16 19:31             ` Peter Seebach
  -1 siblings, 0 replies; 96+ messages in thread
From: Peter Seebach @ 2012-07-16 19:31 UTC (permalink / raw)
  To: linux-kernel

On Fri, 13 Jul 2012 22:00:10 -0700
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> (*) Technically, "&(x)[0]" is actually a really confused way of saying
> "(x+0)" while making sure that "x" was a valid pointer.

But wait, there's more!

Should someone some day try to use an implementation with a fairly
ferocious bounds-checker, the bounds of &x[0] are the bounds of the
first member of x, while the bounds of x are... well, whatever they
were. (If x is an array, they're definitely the bounds of the whole
array. If x is a pointer to something, then it depends on how the
pointer was obtained.)

I'm not sure anyone actually has an implementation that bothers with
this level of granularity in pointers, but I am about 90% sure that an
implementation which did would be conforming.  e.g.:

  int a[2];
  a[1] = 3; /* ok */
  int *b = a;
  b[1] = 3; /* ok */
  int *c = &a[0];
  c[1] = 3; /* bounds violation */

Note that "conforming" does not imply "could compile and run most
existing code without surprising new errors". The world is full of code
which assumes absolute identity between (a+i) and &(*(a+i)).

If the code which inspired your rant was actually doing it on purpose
to obtain this result, I shall have to buy a hat so I can eat it.
(Disclaimer: Hat must be made of something delicious.)

-s
-- 
Listen, get this.  Nobody with a good compiler needs to be justified.

end of thread, other threads:[~2012-07-16 19:31 UTC | newest]

Thread overview: 96+ messages (download: mbox.gz / follow: Atom feed)
2012-07-09 18:41 [PATCHSET] workqueue: reimplement high priority using a separate worker pool Tejun Heo
2012-07-09 18:41 ` Tejun Heo
     [not found] ` <1341859315-17759-1-git-send-email-tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
2012-07-09 18:41   ` [PATCH 1/6] workqueue: don't use WQ_HIGHPRI for unbound workqueues Tejun Heo
2012-07-09 18:41     ` Tejun Heo
2012-07-09 18:41     ` Tejun Heo
2012-07-09 18:41 ` [PATCH 2/6] workqueue: factor out worker_pool from global_cwq Tejun Heo
2012-07-09 18:41   ` Tejun Heo
2012-07-09 18:41   ` Tejun Heo
2012-07-10  4:48   ` Namhyung Kim
2012-07-10  4:48     ` Namhyung Kim
2012-07-10  4:48     ` Namhyung Kim
2012-07-12 17:07     ` Tejun Heo
2012-07-12 17:07       ` Tejun Heo
2012-07-12 17:07       ` Tejun Heo
     [not found]   ` <1341859315-17759-3-git-send-email-tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
2012-07-12 21:49     ` [PATCH UPDATED " Tejun Heo
2012-07-12 21:49       ` Tejun Heo
2012-07-12 21:49       ` Tejun Heo
2012-07-09 18:41 ` [PATCH 3/6] workqueue: use @pool instead of @gcwq or @cpu where applicable Tejun Heo
2012-07-09 18:41   ` Tejun Heo
2012-07-09 18:41   ` Tejun Heo
     [not found]   ` <1341859315-17759-4-git-send-email-tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
2012-07-10 23:30     ` Tony Luck
2012-07-10 23:30       ` Tony Luck
2012-07-10 23:30       ` Tony Luck
2012-07-12 17:06       ` Tejun Heo
2012-07-12 17:06         ` Tejun Heo
2012-07-12 17:06         ` Tejun Heo
2012-07-09 18:41 ` [PATCH 4/6] workqueue: separate out worker_pool flags Tejun Heo
2012-07-09 18:41   ` Tejun Heo
2012-07-09 18:41   ` Tejun Heo
2012-07-09 18:41 ` [PATCH 5/6] workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool() Tejun Heo
2012-07-09 18:41   ` Tejun Heo
2012-07-09 18:41   ` Tejun Heo
2012-07-14  3:55   ` Tejun Heo
2012-07-14  3:55     ` Tejun Heo
2012-07-14  3:55     ` Tejun Heo
2012-07-14  4:27     ` Linus Torvalds
2012-07-14  4:27       ` Linus Torvalds
2012-07-14  4:27       ` Linus Torvalds
     [not found]       ` <CA+55aFyeauqCqrWsx4U2TB2ENrugZXYj+4vw3Fd0kGaeWBP3RA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2012-07-14  4:44         ` Tejun Heo
2012-07-14  4:44           ` Tejun Heo
2012-07-14  4:44           ` Tejun Heo
2012-07-14  5:00           ` Linus Torvalds
2012-07-14  5:00             ` Linus Torvalds
2012-07-14  5:00             ` Linus Torvalds
2012-07-14  5:07             ` Tejun Heo
2012-07-14  5:07               ` Tejun Heo
2012-07-14  5:07               ` Tejun Heo
2012-07-16 19:31             ` Peter Seebach
     [not found]   ` <1341859315-17759-6-git-send-email-tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
2012-07-14  5:21     ` [PATCH UPDATED " Tejun Heo
2012-07-14  5:21       ` Tejun Heo
2012-07-14  5:21       ` Tejun Heo
2012-07-09 18:41 ` [PATCH 6/6] workqueue: reimplement WQ_HIGHPRI using a separate worker_pool Tejun Heo
2012-07-09 18:41   ` Tejun Heo
2012-07-09 18:41   ` Tejun Heo
     [not found]   ` <1341859315-17759-7-git-send-email-tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
2012-07-12 13:06     ` Fengguang Wu
2012-07-12 13:06       ` Fengguang Wu
2012-07-12 13:06       ` Fengguang Wu
2012-07-12 17:05       ` Tejun Heo
2012-07-12 17:05         ` Tejun Heo
2012-07-12 17:05         ` Tejun Heo
2012-07-12 21:45         ` Tejun Heo
2012-07-12 21:45           ` Tejun Heo
2012-07-12 21:45           ` Tejun Heo
2012-07-12 22:16           ` Tony Luck
2012-07-12 22:16             ` Tony Luck
2012-07-12 22:16             ` Tony Luck
2012-07-12 22:32             ` Tejun Heo
2012-07-12 22:32               ` Tejun Heo
2012-07-12 22:32               ` Tejun Heo
2012-07-12 23:24               ` Tony Luck
2012-07-12 23:24                 ` Tony Luck
2012-07-12 23:24                 ` Tony Luck
2012-07-12 23:36                 ` Tejun Heo
2012-07-12 23:36                   ` Tejun Heo
2012-07-12 23:36                   ` Tejun Heo
2012-07-12 23:46                   ` Tony Luck
2012-07-12 23:46                     ` Tony Luck
2012-07-12 23:46                     ` Tony Luck
2012-07-13 17:51                     ` Tony Luck
2012-07-13 17:51                       ` Tony Luck
2012-07-13 17:51                       ` Tony Luck
2012-07-13  2:08           ` Fengguang Wu
2012-07-13  2:08             ` Fengguang Wu
2012-07-13  2:08             ` Fengguang Wu
2012-07-14  3:41             ` Tejun Heo
2012-07-14  3:41               ` Tejun Heo
2012-07-14  3:41               ` Tejun Heo
2012-07-14  3:56   ` [PATCH UPDATED " Tejun Heo
2012-07-14  3:56     ` Tejun Heo
2012-07-14  3:56     ` Tejun Heo
2012-07-14  8:18     ` Fengguang Wu
2012-07-14  8:18       ` Fengguang Wu
2012-07-14  8:18       ` Fengguang Wu
2012-07-14  5:24   ` [PATCH UPDATED v3 " Tejun Heo
2012-07-14  5:24     ` Tejun Heo
2012-07-14  5:24     ` Tejun Heo
