* [PATCH 0/6] Lazy workqueues
@ 2009-08-20 10:19 Jens Axboe
2009-08-20 10:19 ` [PATCH 1/6] workqueue: replace singlethread/freezable/rt parameters and variables with flags Jens Axboe
` (8 more replies)
0 siblings, 9 replies; 28+ messages in thread
From: Jens Axboe @ 2009-08-20 10:19 UTC (permalink / raw)
To: linux-kernel; +Cc: jeff, benh, htejun, bzolnier, alan
(sorry for the resend, but apparently the directory had some patches
in it already. plus, stupid git send-email doesn't default to
no chain replies, really annoying)
Hi,
After yesterday's rant about having too many kernel threads, and checking
how many I actually have running on this system (531!), I decided to
try to do something about it.
My goal was to retain the workqueue interface instead of coming up with
a new scheme that required conversion (or converting to slow_work which,
btw, is an awful name :-). I also wanted to retain the affinity
guarantees of workqueues as much as possible.
So this is a first step in that direction; it's probably full of races
and holes, but it should get the idea across. It adds a
create_lazy_workqueue() helper, similar to the other variants that we
currently have. A lazy workqueue works like a normal workqueue, except
that by default it only starts a core thread instead of threads for
all online CPUs. When work is queued on a lazy workqueue for a CPU
that doesn't have a thread running, it is placed on the core CPU's
list, and the core thread then creates the target thread and moves
the work over to it.
Should task creation fail, the queued work will be executed on the
core CPU instead. Once a lazy workqueue thread has been idle for a
certain amount of time, it will again exit.
The patch boots here; I exercised the rpciod workqueue and verified
that it gets created, runs on the right CPU, and exits a while later.
So core functionality should be there, even if it has holes.
With this patchset, I am now down to 280 kernel threads on one of my test
boxes. Still too many, but it's a start and a net reduction of 251
threads here, or 47%!
The code can also be pulled from:
git://git.kernel.dk/linux-2.6-block.git workqueue
--
Jens Axboe
* [PATCH 1/6] workqueue: replace singlethread/freezable/rt parameters and variables with flags
From: Jens Axboe @ 2009-08-20 10:19 UTC (permalink / raw)
To: linux-kernel; +Cc: jeff, benh, htejun, bzolnier, alan, Jens Axboe
Collapse the three ints into a flags variable, in preparation for
adding another flag.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
include/linux/workqueue.h | 32 ++++++++++++++++++--------------
kernel/workqueue.c | 22 ++++++++--------------
2 files changed, 26 insertions(+), 28 deletions(-)
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 13e1adf..f14e20e 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -165,12 +165,17 @@ struct execute_work {
extern struct workqueue_struct *
-__create_workqueue_key(const char *name, int singlethread,
- int freezeable, int rt, struct lock_class_key *key,
- const char *lock_name);
+__create_workqueue_key(const char *name, unsigned int flags,
+ struct lock_class_key *key, const char *lock_name);
+
+enum {
+ WQ_F_SINGLETHREAD = 1,
+ WQ_F_FREEZABLE = 2,
+ WQ_F_RT = 4,
+};
#ifdef CONFIG_LOCKDEP
-#define __create_workqueue(name, singlethread, freezeable, rt) \
+#define __create_workqueue(name, flags) \
({ \
static struct lock_class_key __key; \
const char *__lock_name; \
@@ -180,20 +185,19 @@ __create_workqueue_key(const char *name, int singlethread,
else \
__lock_name = #name; \
\
- __create_workqueue_key((name), (singlethread), \
- (freezeable), (rt), &__key, \
- __lock_name); \
+ __create_workqueue_key((name), (flags), &__key, __lock_name); \
})
#else
-#define __create_workqueue(name, singlethread, freezeable, rt) \
- __create_workqueue_key((name), (singlethread), (freezeable), (rt), \
- NULL, NULL)
+#define __create_workqueue(name, flags) \
+ __create_workqueue_key((name), (flags), NULL, NULL)
#endif
-#define create_workqueue(name) __create_workqueue((name), 0, 0, 0)
-#define create_rt_workqueue(name) __create_workqueue((name), 0, 0, 1)
-#define create_freezeable_workqueue(name) __create_workqueue((name), 1, 1, 0)
-#define create_singlethread_workqueue(name) __create_workqueue((name), 1, 0, 0)
+#define create_workqueue(name) __create_workqueue((name), 0)
+#define create_rt_workqueue(name) __create_workqueue((name), WQ_F_RT)
+#define create_freezeable_workqueue(name) \
+ __create_workqueue((name), WQ_F_SINGLETHREAD | WQ_F_FREEZABLE)
+#define create_singlethread_workqueue(name) \
+ __create_workqueue((name), WQ_F_SINGLETHREAD)
extern void destroy_workqueue(struct workqueue_struct *wq);
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 0668795..02ba7c9 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -60,9 +60,7 @@ struct workqueue_struct {
struct cpu_workqueue_struct *cpu_wq;
struct list_head list;
const char *name;
- int singlethread;
- int freezeable; /* Freeze threads during suspend */
- int rt;
+ unsigned int flags; /* WQ_F_* flags */
#ifdef CONFIG_LOCKDEP
struct lockdep_map lockdep_map;
#endif
@@ -84,9 +82,9 @@ static const struct cpumask *cpu_singlethread_map __read_mostly;
static cpumask_var_t cpu_populated_map __read_mostly;
/* If it's single threaded, it isn't in the list of workqueues. */
-static inline int is_wq_single_threaded(struct workqueue_struct *wq)
+static inline bool is_wq_single_threaded(struct workqueue_struct *wq)
{
- return wq->singlethread;
+ return wq->flags & WQ_F_SINGLETHREAD;
}
static const struct cpumask *wq_cpu_map(struct workqueue_struct *wq)
@@ -314,7 +312,7 @@ static int worker_thread(void *__cwq)
struct cpu_workqueue_struct *cwq = __cwq;
DEFINE_WAIT(wait);
- if (cwq->wq->freezeable)
+ if (cwq->wq->flags & WQ_F_FREEZABLE)
set_freezable();
set_user_nice(current, -5);
@@ -768,7 +766,7 @@ static int create_workqueue_thread(struct cpu_workqueue_struct *cwq, int cpu)
*/
if (IS_ERR(p))
return PTR_ERR(p);
- if (cwq->wq->rt)
+ if (cwq->wq->flags & WQ_F_RT)
sched_setscheduler_nocheck(p, SCHED_FIFO, &param);
cwq->thread = p;
@@ -789,9 +787,7 @@ static void start_workqueue_thread(struct cpu_workqueue_struct *cwq, int cpu)
}
struct workqueue_struct *__create_workqueue_key(const char *name,
- int singlethread,
- int freezeable,
- int rt,
+ unsigned int flags,
struct lock_class_key *key,
const char *lock_name)
{
@@ -811,12 +807,10 @@ struct workqueue_struct *__create_workqueue_key(const char *name,
wq->name = name;
lockdep_init_map(&wq->lockdep_map, lock_name, key, 0);
- wq->singlethread = singlethread;
- wq->freezeable = freezeable;
- wq->rt = rt;
+ wq->flags = flags;
INIT_LIST_HEAD(&wq->list);
- if (singlethread) {
+ if (flags & WQ_F_SINGLETHREAD) {
cwq = init_cpu_workqueue(wq, singlethread_cpu);
err = create_workqueue_thread(cwq, singlethread_cpu);
start_workqueue_thread(cwq, -1);
--
1.6.4.173.g3f189
* [PATCH 2/6] workqueue: add support for lazy workqueues
From: Jens Axboe @ 2009-08-20 10:20 UTC (permalink / raw)
To: linux-kernel; +Cc: jeff, benh, htejun, bzolnier, alan, Jens Axboe
Lazy workqueues are like normal workqueues, except they don't
start a thread per CPU by default. Instead threads are started
when they are needed, and exit when they have been idle for
some time.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
include/linux/workqueue.h | 5 ++
kernel/workqueue.c | 152 ++++++++++++++++++++++++++++++++++++++++++---
2 files changed, 147 insertions(+), 10 deletions(-)
diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index f14e20e..b2dd267 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -32,6 +32,7 @@ struct work_struct {
#ifdef CONFIG_LOCKDEP
struct lockdep_map lockdep_map;
#endif
+ unsigned int cpu;
};
#define WORK_DATA_INIT() ATOMIC_LONG_INIT(0)
@@ -172,6 +173,7 @@ enum {
WQ_F_SINGLETHREAD = 1,
WQ_F_FREEZABLE = 2,
WQ_F_RT = 4,
+ WQ_F_LAZY = 8,
};
#ifdef CONFIG_LOCKDEP
@@ -198,6 +200,7 @@ enum {
__create_workqueue((name), WQ_F_SINGLETHREAD | WQ_F_FREEZABLE)
#define create_singlethread_workqueue(name) \
__create_workqueue((name), WQ_F_SINGLETHREAD)
+#define create_lazy_workqueue(name) __create_workqueue((name), WQ_F_LAZY)
extern void destroy_workqueue(struct workqueue_struct *wq);
@@ -211,6 +214,8 @@ extern int queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
extern void flush_workqueue(struct workqueue_struct *wq);
extern void flush_scheduled_work(void);
+extern void workqueue_set_lazy_timeout(struct workqueue_struct *wq,
+ unsigned long timeout);
extern int schedule_work(struct work_struct *work);
extern int schedule_work_on(int cpu, struct work_struct *work);
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 02ba7c9..d9ccebc 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -61,11 +61,17 @@ struct workqueue_struct {
struct list_head list;
const char *name;
unsigned int flags; /* WQ_F_* flags */
+ unsigned long lazy_timeout;
+ unsigned int core_cpu;
#ifdef CONFIG_LOCKDEP
struct lockdep_map lockdep_map;
#endif
};
+/* Default lazy workqueue timeout */
+#define WQ_DEF_LAZY_TIMEOUT (60 * HZ)
+
+
/* Serializes the accesses to the list of workqueues. */
static DEFINE_SPINLOCK(workqueue_lock);
static LIST_HEAD(workqueues);
@@ -81,6 +87,8 @@ static const struct cpumask *cpu_singlethread_map __read_mostly;
*/
static cpumask_var_t cpu_populated_map __read_mostly;
+static int create_workqueue_thread(struct cpu_workqueue_struct *cwq, int cpu);
+
/* If it's single threaded, it isn't in the list of workqueues. */
static inline bool is_wq_single_threaded(struct workqueue_struct *wq)
{
@@ -141,11 +149,29 @@ static void insert_work(struct cpu_workqueue_struct *cwq,
static void __queue_work(struct cpu_workqueue_struct *cwq,
struct work_struct *work)
{
+ struct workqueue_struct *wq = cwq->wq;
unsigned long flags;
- spin_lock_irqsave(&cwq->lock, flags);
- insert_work(cwq, work, &cwq->worklist);
- spin_unlock_irqrestore(&cwq->lock, flags);
+ /*
+ * This is a lazy workqueue and this particular CPU thread has
+ * exited. We can't create it from here, so add this work on our
+ * static thread. It will create this thread and move the work there.
+ */
+ if ((wq->flags & WQ_F_LAZY) && !cwq->thread) {
+ struct cpu_workqueue_struct *__cwq;
+
+ local_irq_save(flags);
+ __cwq = wq_per_cpu(wq, wq->core_cpu);
+ work->cpu = smp_processor_id();
+ spin_lock(&__cwq->lock);
+ insert_work(__cwq, work, &__cwq->worklist);
+ spin_unlock_irqrestore(&__cwq->lock, flags);
+ } else {
+ spin_lock_irqsave(&cwq->lock, flags);
+ work->cpu = smp_processor_id();
+ insert_work(cwq, work, &cwq->worklist);
+ spin_unlock_irqrestore(&cwq->lock, flags);
+ }
}
/**
@@ -259,13 +285,16 @@ int queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
}
EXPORT_SYMBOL_GPL(queue_delayed_work_on);
-static void run_workqueue(struct cpu_workqueue_struct *cwq)
+static int run_workqueue(struct cpu_workqueue_struct *cwq)
{
+ int did_work = 0;
+
spin_lock_irq(&cwq->lock);
while (!list_empty(&cwq->worklist)) {
struct work_struct *work = list_entry(cwq->worklist.next,
struct work_struct, entry);
work_func_t f = work->func;
+ int cpu;
#ifdef CONFIG_LOCKDEP
/*
* It is permissible to free the struct work_struct
@@ -280,7 +309,34 @@ static void run_workqueue(struct cpu_workqueue_struct *cwq)
trace_workqueue_execution(cwq->thread, work);
cwq->current_work = work;
list_del_init(cwq->worklist.next);
+ cpu = smp_processor_id();
spin_unlock_irq(&cwq->lock);
+ did_work = 1;
+
+ /*
+ * If work->cpu isn't us, then we need to create the target
+ * workqueue thread (if someone didn't already do that) and
+ * move the work over there.
+ */
+ if ((cwq->wq->flags & WQ_F_LAZY) && work->cpu != cpu) {
+ struct cpu_workqueue_struct *__cwq;
+ struct task_struct *p;
+ int err;
+
+ __cwq = wq_per_cpu(cwq->wq, work->cpu);
+ p = __cwq->thread;
+ if (!p)
+ err = create_workqueue_thread(__cwq, work->cpu);
+ p = __cwq->thread;
+ if (p) {
+ if (work->cpu >= 0)
+ kthread_bind(p, work->cpu);
+ insert_work(__cwq, work, &__cwq->worklist);
+ wake_up_process(p);
+ goto out;
+ }
+ }
+
BUG_ON(get_wq_data(work) != cwq);
work_clear_pending(work);
@@ -305,24 +361,45 @@ static void run_workqueue(struct cpu_workqueue_struct *cwq)
cwq->current_work = NULL;
}
spin_unlock_irq(&cwq->lock);
+out:
+ return did_work;
}
static int worker_thread(void *__cwq)
{
struct cpu_workqueue_struct *cwq = __cwq;
+ struct workqueue_struct *wq = cwq->wq;
+ unsigned long last_active = jiffies;
DEFINE_WAIT(wait);
+ int may_exit;
- if (cwq->wq->flags & WQ_F_FREEZABLE)
+ if (wq->flags & WQ_F_FREEZABLE)
set_freezable();
set_user_nice(current, -5);
+ /*
+ * Allow exit if this isn't our core thread
+ */
+ if ((wq->flags & WQ_F_LAZY) && smp_processor_id() != wq->core_cpu)
+ may_exit = 1;
+ else
+ may_exit = 0;
+
for (;;) {
+ int did_work;
+
prepare_to_wait(&cwq->more_work, &wait, TASK_INTERRUPTIBLE);
if (!freezing(current) &&
!kthread_should_stop() &&
- list_empty(&cwq->worklist))
- schedule();
+ list_empty(&cwq->worklist)) {
+ unsigned long timeout = wq->lazy_timeout;
+
+ if (timeout && may_exit)
+ schedule_timeout(timeout);
+ else
+ schedule();
+ }
finish_wait(&cwq->more_work, &wait);
try_to_freeze();
@@ -330,7 +407,19 @@ static int worker_thread(void *__cwq)
if (kthread_should_stop())
break;
- run_workqueue(cwq);
+ did_work = run_workqueue(cwq);
+
+ /*
+ * If we did no work for the defined timeout period and we are
+ * allowed to exit, do so.
+ */
+ if (did_work)
+ last_active = jiffies;
+ else if (time_after(jiffies, last_active + wq->lazy_timeout) &&
+ may_exit) {
+ cwq->thread = NULL;
+ break;
+ }
}
return 0;
@@ -814,7 +903,10 @@ struct workqueue_struct *__create_workqueue_key(const char *name,
cwq = init_cpu_workqueue(wq, singlethread_cpu);
err = create_workqueue_thread(cwq, singlethread_cpu);
start_workqueue_thread(cwq, -1);
+ wq->core_cpu = singlethread_cpu;
} else {
+ int created = 0;
+
cpu_maps_update_begin();
/*
* We must place this wq on list even if the code below fails.
@@ -833,10 +925,16 @@ struct workqueue_struct *__create_workqueue_key(const char *name,
*/
for_each_possible_cpu(cpu) {
cwq = init_cpu_workqueue(wq, cpu);
- if (err || !cpu_online(cpu))
+ if (err || !cpu_online(cpu) ||
+ (created && (wq->flags & WQ_F_LAZY)))
continue;
err = create_workqueue_thread(cwq, cpu);
start_workqueue_thread(cwq, cpu);
+ if (!err) {
+ if (!created)
+ wq->core_cpu = cpu;
+ created++;
+ }
}
cpu_maps_update_done();
}
@@ -844,7 +942,9 @@ struct workqueue_struct *__create_workqueue_key(const char *name,
if (err) {
destroy_workqueue(wq);
wq = NULL;
- }
+ } else if (wq->flags & WQ_F_LAZY)
+ workqueue_set_lazy_timeout(wq, WQ_DEF_LAZY_TIMEOUT);
+
return wq;
}
EXPORT_SYMBOL_GPL(__create_workqueue_key);
@@ -877,6 +977,13 @@ static void cleanup_workqueue_thread(struct cpu_workqueue_struct *cwq)
cwq->thread = NULL;
}
+static bool hotplug_should_start_thread(struct workqueue_struct *wq, int cpu)
+{
+ if ((wq->flags & WQ_F_LAZY) && cpu != wq->core_cpu)
+ return 0;
+ return 1;
+}
+
/**
* destroy_workqueue - safely terminate a workqueue
* @wq: target workqueue
@@ -923,6 +1030,8 @@ undo:
switch (action) {
case CPU_UP_PREPARE:
+ if (!hotplug_should_start_thread(wq, cpu))
+ break;
if (!create_workqueue_thread(cwq, cpu))
break;
printk(KERN_ERR "workqueue [%s] for %i failed\n",
@@ -932,6 +1041,8 @@ undo:
goto undo;
case CPU_ONLINE:
+ if (!hotplug_should_start_thread(wq, cpu))
+ break;
start_workqueue_thread(cwq, cpu);
break;
@@ -999,6 +1110,27 @@ long work_on_cpu(unsigned int cpu, long (*fn)(void *), void *arg)
EXPORT_SYMBOL_GPL(work_on_cpu);
#endif /* CONFIG_SMP */
+/**
+ * workqueue_set_lazy_timeout - set lazy exit timeout
+ * @wq: the associated workqueue_struct
+ * @timeout: timeout in jiffies
+ *
+ * This will set the timeout for a lazy workqueue. If no work has been
+ * processed for @timeout jiffies, then the workqueue is allowed to exit.
+ * It will be dynamically created again when work is queued to it.
+ *
+ * Note that this only works for workqueues created with
+ * create_lazy_workqueue().
+ */
+void workqueue_set_lazy_timeout(struct workqueue_struct *wq,
+ unsigned long timeout)
+{
+ if (WARN_ON(!(wq->flags & WQ_F_LAZY)))
+ return;
+
+ wq->lazy_timeout = timeout;
+}
+
void __init init_workqueues(void)
{
alloc_cpumask_var(&cpu_populated_map, GFP_KERNEL);
--
1.6.4.173.g3f189
* [PATCH 3/6] crypto: use lazy workqueues
From: Jens Axboe @ 2009-08-20 10:20 UTC (permalink / raw)
To: linux-kernel; +Cc: jeff, benh, htejun, bzolnier, alan, Jens Axboe
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
crypto/crypto_wq.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/crypto/crypto_wq.c b/crypto/crypto_wq.c
index fdcf624..88cccee 100644
--- a/crypto/crypto_wq.c
+++ b/crypto/crypto_wq.c
@@ -20,7 +20,7 @@ EXPORT_SYMBOL_GPL(kcrypto_wq);
static int __init crypto_wq_init(void)
{
- kcrypto_wq = create_workqueue("crypto");
+ kcrypto_wq = create_lazy_workqueue("crypto");
if (unlikely(!kcrypto_wq))
return -ENOMEM;
return 0;
--
1.6.4.173.g3f189
* [PATCH 4/6] libata: use lazy workqueues for the pio task
From: Jens Axboe @ 2009-08-20 10:20 UTC (permalink / raw)
To: linux-kernel; +Cc: jeff, benh, htejun, bzolnier, alan, Jens Axboe
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
drivers/ata/libata-core.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
index 072ba5e..35f74c9 100644
--- a/drivers/ata/libata-core.c
+++ b/drivers/ata/libata-core.c
@@ -6580,7 +6580,7 @@ static int __init ata_init(void)
{
ata_parse_force_param();
- ata_wq = create_workqueue("ata");
+ ata_wq = create_lazy_workqueue("ata");
if (!ata_wq)
goto free_force_tbl;
--
1.6.4.173.g3f189
* [PATCH 5/6] aio: use lazy workqueues
From: Jens Axboe @ 2009-08-20 10:20 UTC (permalink / raw)
To: linux-kernel; +Cc: jeff, benh, htejun, bzolnier, alan, Jens Axboe
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
fs/aio.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/fs/aio.c b/fs/aio.c
index d065b2c..4103b59 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -72,7 +72,7 @@ static int __init aio_setup(void)
kiocb_cachep = KMEM_CACHE(kiocb, SLAB_HWCACHE_ALIGN|SLAB_PANIC);
kioctx_cachep = KMEM_CACHE(kioctx,SLAB_HWCACHE_ALIGN|SLAB_PANIC);
- aio_wq = create_workqueue("aio");
+ aio_wq = create_lazy_workqueue("aio");
pr_debug("aio_setup: sizeof(struct page) = %d\n", (int)sizeof(struct page));
--
1.6.4.173.g3f189
* [PATCH 6/6] sunrpc: use lazy workqueues
From: Jens Axboe @ 2009-08-20 10:20 UTC (permalink / raw)
To: linux-kernel; +Cc: jeff, benh, htejun, bzolnier, alan, Jens Axboe
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
---
net/sunrpc/sched.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
index 8f459ab..ce99fe2 100644
--- a/net/sunrpc/sched.c
+++ b/net/sunrpc/sched.c
@@ -970,7 +970,7 @@ static int rpciod_start(void)
* Create the rpciod thread and wait for it to start.
*/
dprintk("RPC: creating workqueue rpciod\n");
- wq = create_workqueue("rpciod");
+ wq = create_lazy_workqueue("rpciod");
rpciod_workqueue = wq;
return rpciod_workqueue != NULL;
}
--
1.6.4.173.g3f189
* Re: [PATCH 2/6] workqueue: add support for lazy workqueues
From: Frederic Weisbecker @ 2009-08-20 12:01 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-kernel, jeff, benh, htejun, bzolnier, alan
On Thu, Aug 20, 2009 at 12:20:00PM +0200, Jens Axboe wrote:
> Lazy workqueues are like normal workqueues, except they don't
> start a thread per CPU by default. Instead threads are started
> when they are needed, and exit when they have been idle for
> some time.
>
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> ---
> include/linux/workqueue.h | 5 ++
> kernel/workqueue.c | 152 ++++++++++++++++++++++++++++++++++++++++++---
> 2 files changed, 147 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
> index f14e20e..b2dd267 100644
> --- a/include/linux/workqueue.h
> +++ b/include/linux/workqueue.h
> @@ -32,6 +32,7 @@ struct work_struct {
> #ifdef CONFIG_LOCKDEP
> struct lockdep_map lockdep_map;
> #endif
> + unsigned int cpu;
> };
>
> #define WORK_DATA_INIT() ATOMIC_LONG_INIT(0)
> @@ -172,6 +173,7 @@ enum {
> WQ_F_SINGLETHREAD = 1,
> WQ_F_FREEZABLE = 2,
> WQ_F_RT = 4,
> + WQ_F_LAZY = 8,
> };
>
> #ifdef CONFIG_LOCKDEP
> @@ -198,6 +200,7 @@ enum {
> __create_workqueue((name), WQ_F_SINGLETHREAD | WQ_F_FREEZABLE)
> #define create_singlethread_workqueue(name) \
> __create_workqueue((name), WQ_F_SINGLETHREAD)
> +#define create_lazy_workqueue(name) __create_workqueue((name), WQ_F_LAZY)
>
> extern void destroy_workqueue(struct workqueue_struct *wq);
>
> @@ -211,6 +214,8 @@ extern int queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
>
> extern void flush_workqueue(struct workqueue_struct *wq);
> extern void flush_scheduled_work(void);
> +extern void workqueue_set_lazy_timeout(struct workqueue_struct *wq,
> + unsigned long timeout);
>
> extern int schedule_work(struct work_struct *work);
> extern int schedule_work_on(int cpu, struct work_struct *work);
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 02ba7c9..d9ccebc 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -61,11 +61,17 @@ struct workqueue_struct {
> struct list_head list;
> const char *name;
> unsigned int flags; /* WQ_F_* flags */
> + unsigned long lazy_timeout;
> + unsigned int core_cpu;
> #ifdef CONFIG_LOCKDEP
> struct lockdep_map lockdep_map;
> #endif
> };
>
> +/* Default lazy workqueue timeout */
> +#define WQ_DEF_LAZY_TIMEOUT (60 * HZ)
> +
> +
> /* Serializes the accesses to the list of workqueues. */
> static DEFINE_SPINLOCK(workqueue_lock);
> static LIST_HEAD(workqueues);
> @@ -81,6 +87,8 @@ static const struct cpumask *cpu_singlethread_map __read_mostly;
> */
> static cpumask_var_t cpu_populated_map __read_mostly;
>
> +static int create_workqueue_thread(struct cpu_workqueue_struct *cwq, int cpu);
> +
> /* If it's single threaded, it isn't in the list of workqueues. */
> static inline bool is_wq_single_threaded(struct workqueue_struct *wq)
> {
> @@ -141,11 +149,29 @@ static void insert_work(struct cpu_workqueue_struct *cwq,
> static void __queue_work(struct cpu_workqueue_struct *cwq,
> struct work_struct *work)
> {
> + struct workqueue_struct *wq = cwq->wq;
> unsigned long flags;
>
> - spin_lock_irqsave(&cwq->lock, flags);
> - insert_work(cwq, work, &cwq->worklist);
> - spin_unlock_irqrestore(&cwq->lock, flags);
> + /*
> + * This is a lazy workqueue and this particular CPU thread has
> + * exited. We can't create it from here, so add this work on our
> + * static thread. It will create this thread and move the work there.
> + */
> + if ((wq->flags & WQ_F_LAZY) && !cwq->thread) {
Isn't this part racy? If a work has just been queued but the thread
hasn't yet had enough time to start before we get here...?
> + struct cpu_workqueue_struct *__cwq;
> +
> + local_irq_save(flags);
> + __cwq = wq_per_cpu(wq, wq->core_cpu);
> + work->cpu = smp_processor_id();
> + spin_lock(&__cwq->lock);
> + insert_work(__cwq, work, &__cwq->worklist);
> + spin_unlock_irqrestore(&__cwq->lock, flags);
> + } else {
> + spin_lock_irqsave(&cwq->lock, flags);
> + work->cpu = smp_processor_id();
> + insert_work(cwq, work, &cwq->worklist);
> + spin_unlock_irqrestore(&cwq->lock, flags);
> + }
> }
* Re: [PATCH 0/6] Lazy workqueues
From: Peter Zijlstra @ 2009-08-20 12:04 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-kernel, jeff, benh, htejun, bzolnier, alan
On Thu, 2009-08-20 at 12:19 +0200, Jens Axboe wrote:
> (sorry for the resend, but apparently the directory had some patches
> in it already. plus, stupid git send-email doesn't default to
> no chain replies, really annoying)
Newer versions should, I made a stink about this some time ago.
* Re: [PATCH 0/6] Lazy workqueues
From: Jens Axboe @ 2009-08-20 12:08 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: linux-kernel, jeff, benh, htejun, bzolnier, alan
On Thu, Aug 20 2009, Peter Zijlstra wrote:
> On Thu, 2009-08-20 at 12:19 +0200, Jens Axboe wrote:
> > (sorry for the resend, but apparently the directory had some patches
> > in it already. plus, stupid git send-email doesn't default to
> > no chain replies, really annoying)
>
> Newer versions should, I made a stink about this some time ago.
git version 1.6.4.173.g3f189
That's pretty new... But perhaps I should complain too, it's been
annoying me forever.
--
Jens Axboe
* Re: [PATCH 2/6] workqueue: add support for lazy workqueues
From: Jens Axboe @ 2009-08-20 12:10 UTC (permalink / raw)
To: Frederic Weisbecker; +Cc: linux-kernel, jeff, benh, htejun, bzolnier, alan
On Thu, Aug 20 2009, Frederic Weisbecker wrote:
> On Thu, Aug 20, 2009 at 12:20:00PM +0200, Jens Axboe wrote:
> > Lazy workqueues are like normal workqueues, except they don't
> > start a thread per CPU by default. Instead threads are started
> > when they are needed, and exit when they have been idle for
> > some time.
> >
> > Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> > ---
> > include/linux/workqueue.h | 5 ++
> > kernel/workqueue.c | 152 ++++++++++++++++++++++++++++++++++++++++++---
> > 2 files changed, 147 insertions(+), 10 deletions(-)
> >
> > diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
> > index f14e20e..b2dd267 100644
> > --- a/include/linux/workqueue.h
> > +++ b/include/linux/workqueue.h
> > @@ -32,6 +32,7 @@ struct work_struct {
> > #ifdef CONFIG_LOCKDEP
> > struct lockdep_map lockdep_map;
> > #endif
> > + unsigned int cpu;
> > };
> >
> > #define WORK_DATA_INIT() ATOMIC_LONG_INIT(0)
> > @@ -172,6 +173,7 @@ enum {
> > WQ_F_SINGLETHREAD = 1,
> > WQ_F_FREEZABLE = 2,
> > WQ_F_RT = 4,
> > + WQ_F_LAZY = 8,
> > };
> >
> > #ifdef CONFIG_LOCKDEP
> > @@ -198,6 +200,7 @@ enum {
> > __create_workqueue((name), WQ_F_SINGLETHREAD | WQ_F_FREEZABLE)
> > #define create_singlethread_workqueue(name) \
> > __create_workqueue((name), WQ_F_SINGLETHREAD)
> > +#define create_lazy_workqueue(name) __create_workqueue((name), WQ_F_LAZY)
> >
> > extern void destroy_workqueue(struct workqueue_struct *wq);
> >
> > @@ -211,6 +214,8 @@ extern int queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
> >
> > extern void flush_workqueue(struct workqueue_struct *wq);
> > extern void flush_scheduled_work(void);
> > +extern void workqueue_set_lazy_timeout(struct workqueue_struct *wq,
> > + unsigned long timeout);
> >
> > extern int schedule_work(struct work_struct *work);
> > extern int schedule_work_on(int cpu, struct work_struct *work);
> > diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> > index 02ba7c9..d9ccebc 100644
> > --- a/kernel/workqueue.c
> > +++ b/kernel/workqueue.c
> > @@ -61,11 +61,17 @@ struct workqueue_struct {
> > struct list_head list;
> > const char *name;
> > unsigned int flags; /* WQ_F_* flags */
> > + unsigned long lazy_timeout;
> > + unsigned int core_cpu;
> > #ifdef CONFIG_LOCKDEP
> > struct lockdep_map lockdep_map;
> > #endif
> > };
> >
> > +/* Default lazy workqueue timeout */
> > +#define WQ_DEF_LAZY_TIMEOUT (60 * HZ)
> > +
> > +
> > /* Serializes the accesses to the list of workqueues. */
> > static DEFINE_SPINLOCK(workqueue_lock);
> > static LIST_HEAD(workqueues);
> > @@ -81,6 +87,8 @@ static const struct cpumask *cpu_singlethread_map __read_mostly;
> > */
> > static cpumask_var_t cpu_populated_map __read_mostly;
> >
> > +static int create_workqueue_thread(struct cpu_workqueue_struct *cwq, int cpu);
> > +
> > /* If it's single threaded, it isn't in the list of workqueues. */
> > static inline bool is_wq_single_threaded(struct workqueue_struct *wq)
> > {
> > @@ -141,11 +149,29 @@ static void insert_work(struct cpu_workqueue_struct *cwq,
> > static void __queue_work(struct cpu_workqueue_struct *cwq,
> > struct work_struct *work)
> > {
> > + struct workqueue_struct *wq = cwq->wq;
> > unsigned long flags;
> >
> > - spin_lock_irqsave(&cwq->lock, flags);
> > - insert_work(cwq, work, &cwq->worklist);
> > - spin_unlock_irqrestore(&cwq->lock, flags);
> > + /*
> > + * This is a lazy workqueue and this particular CPU thread has
> > + * exited. We can't create it from here, so add this work on our
> > + * static thread. It will create this thread and move the work there.
> > + */
> > + if ((wq->flags & WQ_F_LAZY) && !cwq->thread) {
>
>
>
> Isn't this part racy? If a work has just been queued but the thread
> hasn't yet had enough time to start before we get here...?
Sure it is, see my initial description about holes and races :-)
Thread re-creation and such need to ensure that one and only one thread
gets set up, of course. I just didn't want to spend a lot of time making
it airtight in case people had big complaints that meant I'd have to
rewrite bits of it.
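The "one and only one" requirement is the classic check-lock-recheck pattern. As a rough userspace model of what closing that race could look like (Python standing in for the kernel C; `LazyCpuQueue` and its methods are invented for illustration, and the real fix would presumably use cwq->lock), something along these lines:

```python
import threading

class LazyCpuQueue:
    """Userspace model of one per-CPU queue in a lazy workqueue."""
    def __init__(self):
        self.lock = threading.Lock()   # stands in for cwq->lock
        self.thread = None             # stands in for cwq->thread
        self.created = 0               # how many workers were ever created

    def _worker(self):
        # The real worker would drain the worklist, then exit after an
        # idle timeout (setting self.thread back to None under the lock).
        pass

    def ensure_worker(self):
        # Fast path: a worker already exists, nothing to do.
        if self.thread is not None:
            return
        # Slow path: lock and re-check, so that out of many racing
        # queuers exactly one creates the thread.
        with self.lock:
            if self.thread is None:
                self.created += 1
                self.thread = threading.Thread(target=self._worker)
                self.thread.start()

cwq = LazyCpuQueue()
racers = [threading.Thread(target=cwq.ensure_worker) for _ in range(50)]
for r in racers:
    r.start()
for r in racers:
    r.join()
print(cwq.created)  # prints 1: a single worker despite 50 racing queuers
```

The same double-check would also cover the queue-vs-exit race: a worker about to exit re-checks its worklist under the lock before clearing the thread pointer.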
--
Jens Axboe
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH 0/6] Lazy workqueues
2009-08-20 12:08 ` Jens Axboe
@ 2009-08-20 12:16 ` Peter Zijlstra
2009-08-23 2:42 ` Junio C Hamano
0 siblings, 1 reply; 28+ messages in thread
From: Peter Zijlstra @ 2009-08-20 12:16 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-kernel, jeff, benh, htejun, bzolnier, alan, Junio C Hamano
On Thu, 2009-08-20 at 14:08 +0200, Jens Axboe wrote:
> On Thu, Aug 20 2009, Peter Zijlstra wrote:
> > On Thu, 2009-08-20 at 12:19 +0200, Jens Axboe wrote:
> > > (sorry for the resend, but apparently the directory had some patches
> > > in it already. plus, stupid git send-email doesn't default to
> > > no chain replies, really annoying)
> >
> > Newer versions should, I made a stink about this some time ago.
>
> git version 1.6.4.173.g3f189
>
> That's pretty new... But perhaps I should complain too, it's been
> annoying me forever.
http://marc.info/?l=git&m=123457137328461&w=2
Apparently it didn't happen, nor did I ever see a reply to that posting.
Junio, what happened here?
* Re: [PATCH 0/6] Lazy workqueues
2009-08-20 10:19 [PATCH 0/6] Lazy workqueues Jens Axboe
` (6 preceding siblings ...)
2009-08-20 12:04 ` [PATCH 0/6] Lazy workqueues Peter Zijlstra
@ 2009-08-20 12:22 ` Frederic Weisbecker
2009-08-20 12:41 ` Jens Axboe
2009-08-20 12:59 ` Steven Whitehouse
2009-08-20 12:55 ` Tejun Heo
8 siblings, 2 replies; 28+ messages in thread
From: Frederic Weisbecker @ 2009-08-20 12:22 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-kernel, jeff, benh, htejun, bzolnier, alan, Andrew Morton,
Oleg Nesterov
On Thu, Aug 20, 2009 at 12:19:58PM +0200, Jens Axboe wrote:
> (sorry for the resend, but apparently the directory had some patches
> in it already. plus, stupid git send-email doesn't default to
> no chain replies, really annoying)
>
> Hi,
>
> After yesterdays rant on having too many kernel threads and checking
> how many I actually have running on this system (531!), I decided to
> try and do something about it.
>
> My goal was to retain the workqueue interface instead of coming up with
> a new scheme that required conversion (or converting to slow_work which,
> btw, is an awful name :-). I also wanted to retain the affinity
> guarantees of workqueues as much as possible.
>
> So this is a first step in that direction, it's probably full of races
> and holes, but should get the idea across. It adds a
> create_lazy_workqueue() helper, similar to the other variants that we
> currently have. A lazy workqueue works like a normal workqueue, except
> that it only (by default) starts a core thread instead of threads for
> all online CPUs. When work is queued on a lazy workqueue for a CPU
> that doesn't have a thread running, it will be placed on the core CPUs
> list and that will then create and move the work to the right target.
> Should task creation fail, the queued work will be executed on the
> core CPU instead. Once a lazy workqueue thread has been idle for a
> certain amount of time, it will again exit.
>
> The patch boots here and I exercised the rpciod workqueue and
> verified that it gets created, runs on the right CPU, and exits a while
> later. So core functionality should be there, even if it has holes.
>
> With this patchset, I am now down to 280 kernel threads on one of my test
> boxes. Still too many, but it's a start and a net reduction of 251
> threads here, or 47%!
>
> The code can also be pulled from:
>
> git://git.kernel.dk/linux-2.6-block.git workqueue
>
> --
> Jens Axboe
That looks like a nice idea that may indeed solve the problem of thread
proliferation with per-CPU workqueues.
Now I think there is another problem that has tainted the workqueues from the
beginning, which is the deadlocks induced by one work waiting for another
one in the same workqueue. And since workqueues execute their jobs
serially, the effect is deadlocks.
Often, drivers need to move from the central events/%d to a dedicated workqueue
because of that.
An idea to solve this:
We could have one thread per struct work_struct.
Similarly to this patchset, this thread waits for queuing requests, but only for
this work struct.
If the target cpu has no thread for this work, then create one, like you do, etc...
Then the idea is to have one workqueue per struct work_struct, which handles
per cpu task creation, etc... And this workqueue only handles the given work.
That may solve the deadlock scenarios that are often reported and lead to
dedicated workqueue creation.
That also removes the serialization of work execution between different
worklets. We just keep the serialization within the same work, which seems a
pretty natural thing and is less haphazard than multiple works of different
natures being randomly serialized with each other.
Note the effect would not only be fewer deadlocks but probably also
higher throughput, because works of different natures won't need to wait
for the previous one's completion anymore.
Also lower latency (a high-prio work no longer has to wait for a lower-prio
work).
There are good chances that we won't need per-driver/subsys workqueue
creation anymore after that, because everything would be per worklet.
We could use a single schedule_work() for all of them and not bother choosing
a specific workqueue or the central events/%d.
Hmm?
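The one-worker-per-work_struct idea above can be sketched as a userspace model (Python as a stand-in for kernel code; `Worklet` and its methods are invented for illustration, not an existing API): every work gets its own queue and thread, so submissions of the same work stay serialized while different works never wait on each other:

```python
import threading
import queue

class Worklet:
    """Model of the proposal: one dedicated worker per work_struct."""
    def __init__(self, fn):
        self.fn = fn
        self.q = queue.Queue()
        # One thread per worklet: same-work submissions serialize here,
        # but two different Worklets never block each other.
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        while True:
            item = self.q.get()
            self.fn(item)
            self.q.task_done()

    def schedule(self, arg):
        self.q.put(arg)

    def flush(self):
        self.q.join()

log = []
lock = threading.Lock()

def record(tag):
    with lock:
        log.append(tag)

a = Worklet(lambda n: record(("A", n)))
b = Worklet(lambda n: record(("B", n)))
for i in range(3):
    a.schedule(i)
    b.schedule(i)
a.flush()
b.flush()

# Same-work executions keep queue order even though A and B interleave.
print([n for t, n in log if t == "A"])  # prints [0, 1, 2]
```

A deadlock of the "work A flushes work B on the same queue" kind cannot happen in this model, since B always has its own execution context.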
* Re: [PATCH 4/6] libata: use lazy workqueues for the pio task
2009-08-20 10:20 ` [PATCH 4/6] libata: use lazy workqueues for the pio task Jens Axboe
@ 2009-08-20 12:40 ` Stefan Richter
2009-08-20 12:48 ` Jens Axboe
0 siblings, 1 reply; 28+ messages in thread
From: Stefan Richter @ 2009-08-20 12:40 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-kernel, jeff, benh, htejun, bzolnier, alan
Jens Axboe wrote:
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> ---
> drivers/ata/libata-core.c | 2 +-
> 1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
> index 072ba5e..35f74c9 100644
> --- a/drivers/ata/libata-core.c
> +++ b/drivers/ata/libata-core.c
> @@ -6580,7 +6580,7 @@ static int __init ata_init(void)
> {
> ata_parse_force_param();
>
> - ata_wq = create_workqueue("ata");
> + ata_wq = create_lazy_workqueue("ata");
> if (!ata_wq)
> goto free_force_tbl;
>
However, this does not solve the issue of lacking parallelism on UP
machines, does it?
--
Stefan Richter
-=====-==--= =--- =-=--
http://arcgraph.de/sr/
* Re: [PATCH 0/6] Lazy workqueues
2009-08-20 12:22 ` Frederic Weisbecker
@ 2009-08-20 12:41 ` Jens Axboe
2009-08-20 13:04 ` Tejun Heo
2009-08-20 12:59 ` Steven Whitehouse
1 sibling, 1 reply; 28+ messages in thread
From: Jens Axboe @ 2009-08-20 12:41 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: linux-kernel, jeff, benh, htejun, bzolnier, alan, Andrew Morton,
Oleg Nesterov
On Thu, Aug 20 2009, Frederic Weisbecker wrote:
> On Thu, Aug 20, 2009 at 12:19:58PM +0200, Jens Axboe wrote:
> > (sorry for the resend, but apparently the directory had some patches
> > in it already. plus, stupid git send-email doesn't default to
> > no chain replies, really annoying)
> >
> > Hi,
> >
> > After yesterdays rant on having too many kernel threads and checking
> > how many I actually have running on this system (531!), I decided to
> > try and do something about it.
> >
> > My goal was to retain the workqueue interface instead of coming up with
> > a new scheme that required conversion (or converting to slow_work which,
> > btw, is an awful name :-). I also wanted to retain the affinity
> > guarantees of workqueues as much as possible.
> >
> > So this is a first step in that direction, it's probably full of races
> > and holes, but should get the idea across. It adds a
> > create_lazy_workqueue() helper, similar to the other variants that we
> > currently have. A lazy workqueue works like a normal workqueue, except
> > that it only (by default) starts a core thread instead of threads for
> > all online CPUs. When work is queued on a lazy workqueue for a CPU
> > that doesn't have a thread running, it will be placed on the core CPUs
> > list and that will then create and move the work to the right target.
> > Should task creation fail, the queued work will be executed on the
> > core CPU instead. Once a lazy workqueue thread has been idle for a
> > certain amount of time, it will again exit.
> >
> > The patch boots here and I exercised the rpciod workqueue and
> > verified that it gets created, runs on the right CPU, and exits a while
> > later. So core functionality should be there, even if it has holes.
> >
> > With this patchset, I am now down to 280 kernel threads on one of my test
> > boxes. Still too many, but it's a start and a net reduction of 251
> > threads here, or 47%!
> >
> > The code can also be pulled from:
> >
> > git://git.kernel.dk/linux-2.6-block.git workqueue
> >
> > --
> > Jens Axboe
>
>
> That looks like a nice idea that may indeed solve the problem of
> thread proliferation with per-CPU workqueues.
>
> Now I think there is another problem that has tainted the workqueues
> from the beginning, which is the deadlocks induced by one work waiting
> for another one in the same workqueue. And since workqueues execute
> their jobs serially, the effect is deadlocks.
>
> Often, drivers need to move from the central events/%d to a dedicated
> workqueue because of that.
>
> An idea to solve this:
>
> We could have one thread per struct work_struct. Similarly to this
> patchset, this thread waits for queuing requests, but only for this
> work struct. If the target cpu has no thread for this work, then
> create one, like you do, etc...
>
> Then the idea is to have one workqueue per struct work_struct, which
> handles per cpu task creation, etc... And this workqueue only handles
> the given work.
>
> That may solve the deadlock scenarios that are often reported and lead
> to dedicated workqueue creation.
>
> That also removes the serialization of work execution between
> different worklets. We just keep the serialization within the same
> work, which seems a pretty natural thing and is less haphazard than
> multiple works of different natures being randomly serialized with
> each other.
>
> Note the effect would not only be fewer deadlocks but probably also
> higher throughput, because works of different natures won't need to
> wait for the previous one's completion anymore.
>
> Also lower latency (a high-prio work no longer has to wait for a
> lower-prio work).
>
> There are good chances that we won't need per-driver/subsys
> workqueue creation anymore after that, because everything would be per
> worklet. We could use a single schedule_work() for all of them and
> not bother choosing a specific workqueue or the central events/%d.
>
> Hmm?
I pretty much agree with you, my initial plan for a thread pool would be
very similar. I'll gradually work towards that goal.
--
Jens Axboe
* Re: [PATCH 4/6] libata: use lazy workqueues for the pio task
2009-08-20 12:40 ` Stefan Richter
@ 2009-08-20 12:48 ` Jens Axboe
0 siblings, 0 replies; 28+ messages in thread
From: Jens Axboe @ 2009-08-20 12:48 UTC (permalink / raw)
To: Stefan Richter; +Cc: linux-kernel, jeff, benh, htejun, bzolnier, alan
On Thu, Aug 20 2009, Stefan Richter wrote:
> Jens Axboe wrote:
>> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
>> ---
>> drivers/ata/libata-core.c | 2 +-
>> 1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
>> index 072ba5e..35f74c9 100644
>> --- a/drivers/ata/libata-core.c
>> +++ b/drivers/ata/libata-core.c
>> @@ -6580,7 +6580,7 @@ static int __init ata_init(void)
>> {
>> ata_parse_force_param();
>> - ata_wq = create_workqueue("ata");
>> + ata_wq = create_lazy_workqueue("ata");
>> if (!ata_wq)
>> goto free_force_tbl;
>>
>
> However, this does not solve the issue of lacking parallelism on UP
> machines, does it?
No, the next step is needed there, having multiple threads. Pretty
similar to what Frederic described. Note that the current implementation
doesn't really solve that either, since work will be executed on the CPU
it is queued on. So there's no existing guarantee that it works, on UP
or SMP. This implementation doesn't modify that behaviour; it's identical
to the current workqueue implementation in that respect.
--
Jens Axboe
* Re: [PATCH 0/6] Lazy workqueues
2009-08-20 10:19 [PATCH 0/6] Lazy workqueues Jens Axboe
` (7 preceding siblings ...)
2009-08-20 12:22 ` Frederic Weisbecker
@ 2009-08-20 12:55 ` Tejun Heo
2009-08-21 6:58 ` Jens Axboe
8 siblings, 1 reply; 28+ messages in thread
From: Tejun Heo @ 2009-08-20 12:55 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-kernel, jeff, benh, bzolnier, alan
Hello, Jens.
Jens Axboe wrote:
> After yesterdays rant on having too many kernel threads and checking
> how many I actually have running on this system (531!), I decided to
> try and do something about it.
Heh... that's a lot. How many cpus do you have there? Care to share
the output of "ps -ef"?
> My goal was to retain the workqueue interface instead of coming up with
> a new scheme that required conversion (or converting to slow_work which,
> btw, is an awful name :-). I also wanted to retain the affinity
> guarantees of workqueues as much as possible.
>
> So this is a first step in that direction, it's probably full of races
> and holes, but should get the idea across. It adds a
> create_lazy_workqueue() helper, similar to the other variants that we
> currently have. A lazy workqueue works like a normal workqueue, except
> that it only (by default) starts a core thread instead of threads for
> all online CPUs. When work is queued on a lazy workqueue for a CPU
> that doesn't have a thread running, it will be placed on the core CPUs
> list and that will then create and move the work to the right target.
> Should task creation fail, the queued work will be executed on the
> core CPU instead. Once a lazy workqueue thread has been idle for a
> certain amount of time, it will again exit.
Yeap, the approach seems simple and nice and resolves the problem of
too many idle workers.
> The patch boots here and I exercised the rpciod workqueue and
> verified that it gets created, runs on the right CPU, and exits a while
> later. So core functionality should be there, even if it has holes.
>
> With this patchset, I am now down to 280 kernel threads on one of my test
> boxes. Still too many, but it's a start and a net reduction of 251
> threads here, or 47%!
I'm trying to find out whether the perfect concurrency idea I talked
about on the other thread can be implemented in a reasonable manner.
Would you mind holding for a few days before investing too much effort
into expanding this one to handle multiple workers?
Thanks.
--
tejun
* Re: [PATCH 0/6] Lazy workqueues
2009-08-20 12:22 ` Frederic Weisbecker
2009-08-20 12:41 ` Jens Axboe
@ 2009-08-20 12:59 ` Steven Whitehouse
1 sibling, 0 replies; 28+ messages in thread
From: Steven Whitehouse @ 2009-08-20 12:59 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: Jens Axboe, linux-kernel, jeff, benh, htejun, bzolnier, alan,
Andrew Morton, Oleg Nesterov
Hi,
On Thu, 2009-08-20 at 14:22 +0200, Frederic Weisbecker wrote:
> On Thu, Aug 20, 2009 at 12:19:58PM +0200, Jens Axboe wrote:
> > (sorry for the resend, but apparently the directory had some patches
> > in it already. plus, stupid git send-email doesn't default to
> > no chain replies, really annoying)
> >
> > Hi,
> >
> > After yesterdays rant on having too many kernel threads and checking
> > how many I actually have running on this system (531!), I decided to
> > try and do something about it.
> >
> > My goal was to retain the workqueue interface instead of coming up with
> > a new scheme that required conversion (or converting to slow_work which,
> > btw, is an awful name :-). I also wanted to retain the affinity
> > guarantees of workqueues as much as possible.
> >
> > So this is a first step in that direction, it's probably full of races
> > and holes, but should get the idea across. It adds a
> > create_lazy_workqueue() helper, similar to the other variants that we
> > currently have. A lazy workqueue works like a normal workqueue, except
> > that it only (by default) starts a core thread instead of threads for
> > all online CPUs. When work is queued on a lazy workqueue for a CPU
> > that doesn't have a thread running, it will be placed on the core CPUs
> > list and that will then create and move the work to the right target.
> > Should task creation fail, the queued work will be executed on the
> > core CPU instead. Once a lazy workqueue thread has been idle for a
> > certain amount of time, it will again exit.
> >
> > The patch boots here and I exercised the rpciod workqueue and
> > verified that it gets created, runs on the right CPU, and exits a while
> > later. So core functionality should be there, even if it has holes.
> >
> > With this patchset, I am now down to 280 kernel threads on one of my test
> > boxes. Still too many, but it's a start and a net reduction of 251
> > threads here, or 47%!
> >
> > The code can also be pulled from:
> >
> > git://git.kernel.dk/linux-2.6-block.git workqueue
> >
> > --
> > Jens Axboe
>
>
> That looks like a nice idea that may indeed solve the problem of thread
> proliferation with per-CPU workqueues.
>
> Now I think there is another problem that has tainted the workqueues from the
> beginning, which is the deadlocks induced by one work waiting for another
> one in the same workqueue. And since workqueues execute their jobs
> serially, the effect is deadlocks.
>
In GFS2 we've also got an additional issue. We cannot create threads at
the point of use (or let pending work block on thread creation) because
it implies a GFP_KERNEL memory allocation which could call back into the
fs. This is a particular issue with journal recovery (which uses
slow_work now, older versions used a private thread) and the code which
deals with inodes which have been unlinked remotely.
In addition to that the glock workqueue which we are using would be much
better turned into a tasklet, or similar. The reason why we cannot do
this is that submission of block I/O is only possible from process
context. At some stage it might be possible to partially solve the
problem by separating the parts of the state machine which submit I/O
from those which don't, but I'm not convinced the effort is worth it.
There is also the issue of ordering of I/O requests. The glocks are (for
those which submit I/O) in a 1:1 relationship with inodes or resource
groups and thus indexed by disk block number. I have considered in the
past, creating a workqueue with an elevator based work submission
interface. This would greatly improve the I/O patterns created by
multiple submissions of glock work items. In particular it would make a
big difference when the glock shrinker marks dirty glocks for removal
from the glock cache (under memory pressure) or when processing large
numbers of remote callbacks.
I've not yet come to any conclusion as to whether the "elevator
workqueue" is a good idea or not, any suggestions of a better solution
are very welcome,
Steve.
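For what it's worth, the "elevator workqueue" idea can be modeled in a few lines (a hedged Python sketch, not GFS2 or kernel code; `ElevatorQueue` and its C-SCAN-style dispatch are assumptions about what such an interface might do): pending work items are kept sorted by disk block number and dispatched in a one-way scan from the last-served position, wrapping at the end:

```python
import bisect

class ElevatorQueue:
    """Model of an elevator-based work submission interface: pending
    glock work items are kept sorted by disk block number and served
    in a one-way scan (C-SCAN) from the last-served position."""
    def __init__(self):
        self.pending = []   # sorted list of (block_nr, work) pairs
        self.head = 0       # block number the scan has reached

    def submit(self, block_nr, work):
        # Insertion keeps the pending list sorted by block number.
        bisect.insort(self.pending, (block_nr, work))

    def dispatch_all(self):
        """Yield works in scan order: blocks >= head first, then wrap."""
        i = bisect.bisect_left(self.pending, (self.head,))
        ordered = self.pending[i:] + self.pending[:i]
        self.pending = []
        for block_nr, work in ordered:
            self.head = block_nr
            yield work

eq = ElevatorQueue()
for blk in (70, 10, 55, 30):
    eq.submit(blk, f"glock-{blk}")
eq.head = 25  # pretend the previous scan stopped at block 25
print(list(eq.dispatch_all()))
# prints ['glock-30', 'glock-55', 'glock-70', 'glock-10']
```

Batching dispatch like this is what would turn the shrinker's random glock writeback into mostly-sequential I/O; whether it belongs in the workqueue layer or above it is exactly the open question.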
* Re: [PATCH 0/6] Lazy workqueues
2009-08-20 12:41 ` Jens Axboe
@ 2009-08-20 13:04 ` Tejun Heo
0 siblings, 0 replies; 28+ messages in thread
From: Tejun Heo @ 2009-08-20 13:04 UTC (permalink / raw)
To: Jens Axboe
Cc: Frederic Weisbecker, linux-kernel, jeff, benh, bzolnier, alan,
Andrew Morton, Oleg Nesterov
Jens Axboe wrote:
> On Thu, Aug 20 2009, Frederic Weisbecker wrote:
>> An idea to solve this:
>>
>> We could have one thread per struct work_struct. Similarly to this
>> patchset, this thread waits for queuing requests, but only for this
>> work struct. If the target cpu has no thread for this work, then
>> create one, like you do, etc...
>>
>> Then the idea is to have one workqueue per struct work_struct, which
>> handles per cpu task creation, etc... And this workqueue only handles
>> the given work.
>>
>> That may solve the deadlock scenarios that are often reported and lead
>> to dedicated workqueue creation.
>>
>> That also removes the serialization of work execution between
>> different worklets. We just keep the serialization within the same
>> work, which seems a pretty natural thing and is less haphazard than
>> multiple works of different natures being randomly serialized with
>> each other.
>>
>> Note the effect would not only be fewer deadlocks but probably also
>> higher throughput, because works of different natures won't need to
>> wait for the previous one's completion anymore.
>>
>> Also lower latency (a high-prio work no longer has to wait for a
>> lower-prio work).
>>
>> There are good chances that we won't need per-driver/subsys
>> workqueue creation anymore after that, because everything would be
>> per worklet. We could use a single schedule_work() for all of them and
>> not bother choosing a specific workqueue or the central events/%d.
>>
>> Hmm?
>
> I pretty much agree with you, my initial plan for a thread pool would be
> very similar. I'll gradually work towards that goal.
Several issues that come to my mind with the above approach are...
* There will still be cases where you need a fixed dedicated thread.
Execution resources for anything which might be used during IO need
to be preallocated (at least some of them) to guarantee forward
progress.
* Depending on how heavily works are used (and I think their usage
will grow with improvements like this), we might end up with many
idling threads again, and please note that thread creation /
destruction is quite costly compared to what works usually do.
* Having different threads executing different works all the time
might improve latency but if works are used frequently enough it's
likely to lower throughput because short works which can be handled
in batch by a single thread now need to be handled by different
threads. Scheduling overhead can be significant compared to what
those works actually do and it will also cost much more cache
footprint-wise.
Thanks.
--
tejun
* Re: [PATCH 5/6] aio: use lazy workqueues
2009-08-20 10:20 ` [PATCH 5/6] aio: use lazy workqueues Jens Axboe
@ 2009-08-20 15:09 ` Jeff Moyer
2009-08-21 18:31 ` Zach Brown
0 siblings, 1 reply; 28+ messages in thread
From: Jeff Moyer @ 2009-08-20 15:09 UTC (permalink / raw)
To: Jens Axboe; +Cc: linux-kernel, jeff, benh, htejun, bzolnier, alan, zach.brown
Jens Axboe <jens.axboe@oracle.com> writes:
> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
> ---
> fs/aio.c | 2 +-
> 1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/fs/aio.c b/fs/aio.c
> index d065b2c..4103b59 100644
> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -72,7 +72,7 @@ static int __init aio_setup(void)
> kiocb_cachep = KMEM_CACHE(kiocb, SLAB_HWCACHE_ALIGN|SLAB_PANIC);
> kioctx_cachep = KMEM_CACHE(kioctx,SLAB_HWCACHE_ALIGN|SLAB_PANIC);
>
> - aio_wq = create_workqueue("aio");
> + aio_wq = create_lazy_workqueue("aio");
>
> pr_debug("aio_setup: sizeof(struct page) = %d\n", (int)sizeof(struct page));
So far as I can tell, the aio workqueue isn't used for much these days.
We could probably get away with switching to keventd. Zach, isn't
someone working on a patch to get rid of all of the -EIOCBRETRY
infrastructure? That patch would probably make things clearer in this
area.
Cheers,
Jeff
* Re: [PATCH 0/6] Lazy workqueues
2009-08-20 12:55 ` Tejun Heo
@ 2009-08-21 6:58 ` Jens Axboe
0 siblings, 0 replies; 28+ messages in thread
From: Jens Axboe @ 2009-08-21 6:58 UTC (permalink / raw)
To: Tejun Heo; +Cc: linux-kernel, jeff, benh, bzolnier, alan
[-- Attachment #1: Type: text/plain, Size: 2757 bytes --]
On Thu, Aug 20 2009, Tejun Heo wrote:
> Hello, Jens.
>
> Jens Axboe wrote:
> > After yesterdays rant on having too many kernel threads and checking
> > how many I actually have running on this system (531!), I decided to
> > try and do something about it.
>
> Heh... that's a lot. How many cpus do you have there? Care to share
> the output of "ps -ef"?
That system has 64 cpus. ps -ef attached.
> > My goal was to retain the workqueue interface instead of coming up with
> > a new scheme that required conversion (or converting to slow_work which,
> > btw, is an awful name :-). I also wanted to retain the affinity
> > guarantees of workqueues as much as possible.
> >
> > So this is a first step in that direction, it's probably full of races
> > and holes, but should get the idea across. It adds a
> > create_lazy_workqueue() helper, similar to the other variants that we
> > currently have. A lazy workqueue works like a normal workqueue, except
> > that it only (by default) starts a core thread instead of threads for
> > all online CPUs. When work is queued on a lazy workqueue for a CPU
> > that doesn't have a thread running, it will be placed on the core CPUs
> > list and that will then create and move the work to the right target.
> > Should task creation fail, the queued work will be executed on the
> > core CPU instead. Once a lazy workqueue thread has been idle for a
> > certain amount of time, it will again exit.
>
> Yeap, the approach seems simple and nice and resolves the problem of
> too many idle workers.
I think so too :-)
> > The patch boots here and I exercised the rpciod workqueue and
> > verified that it gets created, runs on the right CPU, and exits a while
> > later. So core functionality should be there, even if it has holes.
> >
> > With this patchset, I am now down to 280 kernel threads on one of my test
> > boxes. Still too many, but it's a start and a net reduction of 251
> > threads here, or 47%!
>
> I'm trying to find out whether the perfect concurrency idea I talked
> about on the other thread can be implemented in a reasonable manner.
> Would you mind holding for a few days before investing too much effort
> into expanding this one to handle multiple workers?
No problem, I'll just get the races closed up in the existing version.
I think we basically have two classes of users here - one that the
existing workqueue scheme works well for, high performance work
execution where CPU affinity matters. The other is just slow work
execution (like the libata pio task stuff), which would be better
handled by a generic thread pool implementation. I think we should start
converting those users to slow_work, in fact I think I'll try libata to
try and set a good example :-)
--
Jens Axboe
[-- Attachment #2: ps-ef.txt --]
[-- Type: text/plain, Size: 25896 bytes --]
UID PID PPID C STIME TTY TIME CMD
root 1 0 3 09:53 ? 00:00:06 init [2]
root 2 0 0 09:53 ? 00:00:00 [kthreadd]
root 3 2 0 09:53 ? 00:00:00 [migration/0]
root 4 2 0 09:53 ? 00:00:00 [ksoftirqd/0]
root 5 2 0 09:53 ? 00:00:00 [watchdog/0]
root 6 2 0 09:53 ? 00:00:00 [migration/1]
root 7 2 0 09:53 ? 00:00:00 [ksoftirqd/1]
root 8 2 0 09:53 ? 00:00:00 [watchdog/1]
root 9 2 0 09:53 ? 00:00:00 [migration/2]
root 10 2 0 09:53 ? 00:00:00 [ksoftirqd/2]
root 11 2 0 09:53 ? 00:00:00 [watchdog/2]
root 12 2 0 09:53 ? 00:00:00 [migration/3]
root 13 2 0 09:53 ? 00:00:00 [ksoftirqd/3]
root 14 2 0 09:53 ? 00:00:00 [watchdog/3]
root 15 2 0 09:53 ? 00:00:00 [migration/4]
root 16 2 0 09:53 ? 00:00:00 [ksoftirqd/4]
root 17 2 0 09:53 ? 00:00:00 [watchdog/4]
root 18 2 0 09:53 ? 00:00:00 [migration/5]
root 19 2 0 09:53 ? 00:00:00 [ksoftirqd/5]
root 20 2 0 09:53 ? 00:00:00 [watchdog/5]
root 21 2 0 09:53 ? 00:00:00 [migration/6]
root 22 2 0 09:53 ? 00:00:00 [ksoftirqd/6]
root 23 2 0 09:53 ? 00:00:00 [watchdog/6]
root 24 2 0 09:53 ? 00:00:00 [migration/7]
root 25 2 0 09:53 ? 00:00:00 [ksoftirqd/7]
root 26 2 0 09:53 ? 00:00:00 [watchdog/7]
root 27 2 0 09:53 ? 00:00:00 [migration/8]
root 28 2 0 09:53 ? 00:00:00 [ksoftirqd/8]
root 29 2 0 09:53 ? 00:00:00 [watchdog/8]
root 30 2 0 09:53 ? 00:00:00 [migration/9]
root 31 2 0 09:53 ? 00:00:00 [ksoftirqd/9]
root 32 2 0 09:53 ? 00:00:00 [watchdog/9]
root 33 2 0 09:53 ? 00:00:00 [migration/10]
root 34 2 0 09:53 ? 00:00:00 [ksoftirqd/10]
root 35 2 0 09:53 ? 00:00:00 [watchdog/10]
root 36 2 0 09:53 ? 00:00:00 [migration/11]
root 37 2 0 09:53 ? 00:00:00 [ksoftirqd/11]
root 38 2 0 09:53 ? 00:00:00 [watchdog/11]
root 39 2 0 09:53 ? 00:00:00 [migration/12]
root 40 2 0 09:53 ? 00:00:00 [ksoftirqd/12]
root 41 2 0 09:53 ? 00:00:00 [watchdog/12]
root 42 2 0 09:53 ? 00:00:00 [migration/13]
root 43 2 0 09:53 ? 00:00:00 [ksoftirqd/13]
root 44 2 0 09:53 ? 00:00:00 [watchdog/13]
root 45 2 0 09:53 ? 00:00:00 [migration/14]
root 46 2 0 09:53 ? 00:00:00 [ksoftirqd/14]
root 47 2 0 09:53 ? 00:00:00 [watchdog/14]
root 48 2 0 09:53 ? 00:00:00 [migration/15]
root 49 2 0 09:53 ? 00:00:00 [ksoftirqd/15]
root 50 2 0 09:53 ? 00:00:00 [watchdog/15]
root 51 2 0 09:53 ? 00:00:00 [migration/16]
root 52 2 0 09:53 ? 00:00:00 [ksoftirqd/16]
root 53 2 0 09:53 ? 00:00:00 [watchdog/16]
root 54 2 0 09:53 ? 00:00:00 [migration/17]
root 55 2 0 09:53 ? 00:00:00 [ksoftirqd/17]
root 56 2 0 09:53 ? 00:00:00 [watchdog/17]
root 57 2 0 09:53 ? 00:00:00 [migration/18]
root 58 2 0 09:53 ? 00:00:00 [ksoftirqd/18]
root 59 2 0 09:53 ? 00:00:00 [watchdog/18]
root 60 2 0 09:53 ? 00:00:00 [migration/19]
root 61 2 0 09:53 ? 00:00:00 [ksoftirqd/19]
root 62 2 0 09:53 ? 00:00:00 [watchdog/19]
root 63 2 0 09:53 ? 00:00:00 [migration/20]
root 64 2 0 09:53 ? 00:00:00 [ksoftirqd/20]
root 65 2 0 09:53 ? 00:00:00 [watchdog/20]
root 66 2 0 09:53 ? 00:00:00 [migration/21]
root 67 2 0 09:53 ? 00:00:00 [ksoftirqd/21]
root 68 2 0 09:53 ? 00:00:00 [watchdog/21]
root 69 2 0 09:53 ? 00:00:00 [migration/22]
root 70 2 0 09:53 ? 00:00:00 [ksoftirqd/22]
root 71 2 0 09:53 ? 00:00:00 [watchdog/22]
root 72 2 0 09:53 ? 00:00:00 [migration/23]
root 73 2 0 09:53 ? 00:00:00 [ksoftirqd/23]
root 74 2 0 09:53 ? 00:00:00 [watchdog/23]
root 75 2 0 09:53 ? 00:00:00 [migration/24]
root 76 2 0 09:53 ? 00:00:00 [ksoftirqd/24]
root 77 2 0 09:53 ? 00:00:00 [watchdog/24]
root 78 2 0 09:53 ? 00:00:00 [migration/25]
root 79 2 0 09:53 ? 00:00:00 [ksoftirqd/25]
root 80 2 0 09:53 ? 00:00:00 [watchdog/25]
root 81 2 0 09:53 ? 00:00:00 [migration/26]
root 82 2 0 09:53 ? 00:00:00 [ksoftirqd/26]
root 83 2 0 09:53 ? 00:00:00 [watchdog/26]
root 84 2 0 09:53 ? 00:00:00 [migration/27]
root 85 2 0 09:53 ? 00:00:00 [ksoftirqd/27]
root 86 2 0 09:53 ? 00:00:00 [watchdog/27]
root 87 2 0 09:53 ? 00:00:00 [migration/28]
root 88 2 0 09:53 ? 00:00:00 [ksoftirqd/28]
root 89 2 0 09:53 ? 00:00:00 [watchdog/28]
root 90 2 0 09:53 ? 00:00:00 [migration/29]
root 91 2 0 09:53 ? 00:00:00 [ksoftirqd/29]
root 92 2 0 09:53 ? 00:00:00 [watchdog/29]
root 93 2 0 09:53 ? 00:00:00 [migration/30]
root 94 2 0 09:53 ? 00:00:00 [ksoftirqd/30]
root 95 2 0 09:53 ? 00:00:00 [watchdog/30]
root 96 2 0 09:53 ? 00:00:00 [migration/31]
root 97 2 0 09:53 ? 00:00:00 [ksoftirqd/31]
root 98 2 0 09:53 ? 00:00:00 [watchdog/31]
root 99 2 0 09:53 ? 00:00:00 [migration/32]
root 100 2 0 09:53 ? 00:00:00 [ksoftirqd/32]
root 101 2 0 09:53 ? 00:00:00 [watchdog/32]
root 102 2 0 09:53 ? 00:00:00 [migration/33]
root 103 2 0 09:53 ? 00:00:00 [ksoftirqd/33]
root 104 2 0 09:53 ? 00:00:00 [watchdog/33]
root 105 2 0 09:53 ? 00:00:00 [migration/34]
root 106 2 0 09:53 ? 00:00:00 [ksoftirqd/34]
root 107 2 0 09:53 ? 00:00:00 [watchdog/34]
root 108 2 0 09:53 ? 00:00:00 [migration/35]
root 109 2 0 09:53 ? 00:00:00 [ksoftirqd/35]
root 110 2 0 09:53 ? 00:00:00 [watchdog/35]
root 111 2 0 09:53 ? 00:00:00 [migration/36]
root 112 2 0 09:53 ? 00:00:00 [ksoftirqd/36]
root 113 2 0 09:53 ? 00:00:00 [watchdog/36]
root 114 2 0 09:53 ? 00:00:00 [migration/37]
root 115 2 0 09:53 ? 00:00:00 [ksoftirqd/37]
root 116 2 0 09:53 ? 00:00:00 [watchdog/37]
root 117 2 0 09:53 ? 00:00:00 [migration/38]
root 118 2 0 09:53 ? 00:00:00 [ksoftirqd/38]
root 119 2 0 09:53 ? 00:00:00 [watchdog/38]
root 120 2 0 09:53 ? 00:00:00 [migration/39]
root 121 2 0 09:53 ? 00:00:00 [ksoftirqd/39]
root 122 2 0 09:53 ? 00:00:00 [watchdog/39]
root 123 2 0 09:53 ? 00:00:00 [migration/40]
root 124 2 0 09:53 ? 00:00:00 [ksoftirqd/40]
root 125 2 0 09:53 ? 00:00:00 [watchdog/40]
root 126 2 0 09:53 ? 00:00:00 [migration/41]
root 127 2 0 09:53 ? 00:00:00 [ksoftirqd/41]
root 128 2 0 09:53 ? 00:00:00 [watchdog/41]
root 129 2 0 09:53 ? 00:00:00 [migration/42]
root 130 2 0 09:53 ? 00:00:00 [ksoftirqd/42]
root 131 2 0 09:53 ? 00:00:00 [watchdog/42]
root 132 2 0 09:53 ? 00:00:00 [migration/43]
root 133 2 0 09:53 ? 00:00:00 [ksoftirqd/43]
root 134 2 0 09:53 ? 00:00:00 [watchdog/43]
root 135 2 0 09:53 ? 00:00:00 [migration/44]
root 136 2 0 09:53 ? 00:00:00 [ksoftirqd/44]
root 137 2 0 09:53 ? 00:00:00 [watchdog/44]
root 138 2 0 09:53 ? 00:00:00 [migration/45]
root 139 2 0 09:53 ? 00:00:00 [ksoftirqd/45]
root 140 2 0 09:53 ? 00:00:00 [watchdog/45]
root 141 2 0 09:53 ? 00:00:00 [migration/46]
root 142 2 0 09:53 ? 00:00:00 [ksoftirqd/46]
root 143 2 0 09:53 ? 00:00:00 [watchdog/46]
root 144 2 0 09:53 ? 00:00:00 [migration/47]
root 145 2 0 09:53 ? 00:00:00 [ksoftirqd/47]
root 146 2 0 09:53 ? 00:00:00 [watchdog/47]
root 147 2 0 09:53 ? 00:00:00 [migration/48]
root 148 2 0 09:53 ? 00:00:00 [ksoftirqd/48]
root 149 2 0 09:53 ? 00:00:00 [watchdog/48]
root 150 2 0 09:53 ? 00:00:00 [migration/49]
root 151 2 0 09:53 ? 00:00:00 [ksoftirqd/49]
root 152 2 0 09:53 ? 00:00:00 [watchdog/49]
root 153 2 0 09:53 ? 00:00:00 [migration/50]
root 154 2 0 09:53 ? 00:00:00 [ksoftirqd/50]
root 155 2 0 09:53 ? 00:00:00 [watchdog/50]
root 156 2 0 09:53 ? 00:00:00 [migration/51]
root 157 2 0 09:53 ? 00:00:00 [ksoftirqd/51]
root 158 2 0 09:53 ? 00:00:00 [watchdog/51]
root 159 2 0 09:53 ? 00:00:00 [migration/52]
root 160 2 0 09:53 ? 00:00:00 [ksoftirqd/52]
root 161 2 0 09:53 ? 00:00:00 [watchdog/52]
root 162 2 0 09:53 ? 00:00:00 [migration/53]
root 163 2 0 09:53 ? 00:00:00 [ksoftirqd/53]
root 164 2 0 09:53 ? 00:00:00 [watchdog/53]
root 165 2 0 09:53 ? 00:00:00 [migration/54]
root 166 2 0 09:53 ? 00:00:00 [ksoftirqd/54]
root 167 2 0 09:53 ? 00:00:00 [watchdog/54]
root 168 2 0 09:53 ? 00:00:00 [migration/55]
root 169 2 0 09:53 ? 00:00:00 [ksoftirqd/55]
root 170 2 0 09:53 ? 00:00:00 [watchdog/55]
root 171 2 0 09:53 ? 00:00:00 [migration/56]
root 172 2 0 09:53 ? 00:00:00 [ksoftirqd/56]
root 173 2 0 09:53 ? 00:00:00 [watchdog/56]
root 174 2 0 09:53 ? 00:00:00 [migration/57]
root 175 2 0 09:53 ? 00:00:00 [ksoftirqd/57]
root 176 2 0 09:53 ? 00:00:00 [watchdog/57]
root 177 2 0 09:53 ? 00:00:00 [migration/58]
root 178 2 0 09:53 ? 00:00:00 [ksoftirqd/58]
root 179 2 0 09:53 ? 00:00:00 [watchdog/58]
root 180 2 0 09:53 ? 00:00:00 [migration/59]
root 181 2 0 09:53 ? 00:00:00 [ksoftirqd/59]
root 182 2 0 09:53 ? 00:00:00 [watchdog/59]
root 183 2 0 09:53 ? 00:00:00 [migration/60]
root 184 2 0 09:53 ? 00:00:00 [ksoftirqd/60]
root 185 2 0 09:53 ? 00:00:00 [watchdog/60]
root 186 2 0 09:53 ? 00:00:00 [migration/61]
root 187 2 0 09:53 ? 00:00:00 [ksoftirqd/61]
root 188 2 0 09:53 ? 00:00:00 [watchdog/61]
root 189 2 0 09:53 ? 00:00:00 [migration/62]
root 190 2 0 09:53 ? 00:00:00 [ksoftirqd/62]
root 191 2 0 09:53 ? 00:00:00 [watchdog/62]
root 192 2 0 09:53 ? 00:00:00 [migration/63]
root 193 2 0 09:53 ? 00:00:00 [ksoftirqd/63]
root 194 2 0 09:53 ? 00:00:00 [watchdog/63]
root 195 2 0 09:53 ? 00:00:00 [events/0]
root 196 2 0 09:53 ? 00:00:00 [events/1]
root 197 2 0 09:53 ? 00:00:00 [events/2]
root 198 2 0 09:53 ? 00:00:00 [events/3]
root 199 2 0 09:53 ? 00:00:00 [events/4]
root 200 2 0 09:53 ? 00:00:00 [events/5]
root 201 2 0 09:53 ? 00:00:00 [events/6]
root 202 2 0 09:53 ? 00:00:00 [events/7]
root 203 2 0 09:53 ? 00:00:00 [events/8]
root 204 2 0 09:53 ? 00:00:00 [events/9]
root 205 2 0 09:53 ? 00:00:00 [events/10]
root 206 2 0 09:53 ? 00:00:00 [events/11]
root 207 2 0 09:53 ? 00:00:00 [events/12]
root 208 2 0 09:53 ? 00:00:00 [events/13]
root 209 2 0 09:53 ? 00:00:00 [events/14]
root 210 2 0 09:53 ? 00:00:00 [events/15]
root 211 2 0 09:53 ? 00:00:00 [events/16]
root 212 2 0 09:53 ? 00:00:00 [events/17]
root 213 2 0 09:53 ? 00:00:00 [events/18]
root 214 2 0 09:53 ? 00:00:00 [events/19]
root 215 2 0 09:53 ? 00:00:00 [events/20]
root 216 2 0 09:53 ? 00:00:00 [events/21]
root 217 2 0 09:53 ? 00:00:00 [events/22]
root 218 2 0 09:53 ? 00:00:00 [events/23]
root 219 2 0 09:53 ? 00:00:00 [events/24]
root 220 2 0 09:53 ? 00:00:00 [events/25]
root 221 2 0 09:53 ? 00:00:00 [events/26]
root 222 2 0 09:53 ? 00:00:00 [events/27]
root 223 2 0 09:53 ? 00:00:00 [events/28]
root 224 2 0 09:53 ? 00:00:00 [events/29]
root 225 2 0 09:53 ? 00:00:00 [events/30]
root 226 2 0 09:53 ? 00:00:00 [events/31]
root 227 2 0 09:53 ? 00:00:00 [events/32]
root 228 2 0 09:53 ? 00:00:00 [events/33]
root 229 2 0 09:53 ? 00:00:00 [events/34]
root 230 2 0 09:53 ? 00:00:00 [events/35]
root 231 2 0 09:53 ? 00:00:00 [events/36]
root 232 2 0 09:53 ? 00:00:00 [events/37]
root 233 2 0 09:53 ? 00:00:00 [events/38]
root 234 2 0 09:53 ? 00:00:00 [events/39]
root 235 2 0 09:53 ? 00:00:00 [events/40]
root 236 2 0 09:53 ? 00:00:00 [events/41]
root 237 2 0 09:53 ? 00:00:00 [events/42]
root 238 2 0 09:53 ? 00:00:00 [events/43]
root 239 2 0 09:53 ? 00:00:00 [events/44]
root 240 2 0 09:53 ? 00:00:00 [events/45]
root 241 2 0 09:53 ? 00:00:00 [events/46]
root 242 2 0 09:53 ? 00:00:00 [events/47]
root 243 2 0 09:53 ? 00:00:00 [events/48]
root 244 2 0 09:53 ? 00:00:00 [events/49]
root 245 2 0 09:53 ? 00:00:00 [events/50]
root 246 2 0 09:53 ? 00:00:00 [events/51]
root 247 2 0 09:53 ? 00:00:00 [events/52]
root 248 2 0 09:53 ? 00:00:00 [events/53]
root 249 2 0 09:53 ? 00:00:00 [events/54]
root 250 2 0 09:53 ? 00:00:00 [events/55]
root 251 2 0 09:53 ? 00:00:00 [events/56]
root 252 2 0 09:53 ? 00:00:00 [events/57]
root 253 2 0 09:53 ? 00:00:00 [events/58]
root 254 2 0 09:53 ? 00:00:00 [events/59]
root 255 2 0 09:53 ? 00:00:00 [events/60]
root 256 2 0 09:53 ? 00:00:00 [events/61]
root 257 2 0 09:53 ? 00:00:00 [events/62]
root 258 2 0 09:53 ? 00:00:00 [events/63]
root 259 2 0 09:53 ? 00:00:00 [khelper]
root 264 2 0 09:53 ? 00:00:00 [async/mgr]
root 432 2 0 09:53 ? 00:00:00 [sync_supers]
root 434 2 0 09:53 ? 00:00:00 [bdi-default]
root 435 2 0 09:53 ? 00:00:00 [kblockd/0]
root 436 2 0 09:53 ? 00:00:00 [kblockd/1]
root 437 2 0 09:53 ? 00:00:00 [kblockd/2]
root 438 2 0 09:53 ? 00:00:00 [kblockd/3]
root 439 2 0 09:53 ? 00:00:00 [kblockd/4]
root 440 2 0 09:53 ? 00:00:00 [kblockd/5]
root 441 2 0 09:53 ? 00:00:00 [kblockd/6]
root 442 2 0 09:53 ? 00:00:00 [kblockd/7]
root 443 2 0 09:53 ? 00:00:00 [kblockd/8]
root 444 2 0 09:53 ? 00:00:00 [kblockd/9]
root 445 2 0 09:53 ? 00:00:00 [kblockd/10]
root 446 2 0 09:53 ? 00:00:00 [kblockd/11]
root 447 2 0 09:53 ? 00:00:00 [kblockd/12]
root 448 2 0 09:53 ? 00:00:00 [kblockd/13]
root 449 2 0 09:53 ? 00:00:00 [kblockd/14]
root 450 2 0 09:53 ? 00:00:00 [kblockd/15]
root 451 2 0 09:53 ? 00:00:00 [kblockd/16]
root 452 2 0 09:53 ? 00:00:00 [kblockd/17]
root 453 2 0 09:53 ? 00:00:00 [kblockd/18]
root 454 2 0 09:53 ? 00:00:00 [kblockd/19]
root 455 2 0 09:53 ? 00:00:00 [kblockd/20]
root 456 2 0 09:53 ? 00:00:00 [kblockd/21]
root 457 2 0 09:53 ? 00:00:00 [kblockd/22]
root 458 2 0 09:53 ? 00:00:00 [kblockd/23]
root 459 2 0 09:53 ? 00:00:00 [kblockd/24]
root 460 2 0 09:53 ? 00:00:00 [kblockd/25]
root 461 2 0 09:53 ? 00:00:00 [kblockd/26]
root 462 2 0 09:53 ? 00:00:00 [kblockd/27]
root 463 2 0 09:53 ? 00:00:00 [kblockd/28]
root 464 2 0 09:53 ? 00:00:00 [kblockd/29]
root 465 2 0 09:53 ? 00:00:00 [kblockd/30]
root 466 2 0 09:53 ? 00:00:00 [kblockd/31]
root 467 2 0 09:53 ? 00:00:00 [kblockd/32]
root 468 2 0 09:53 ? 00:00:00 [kblockd/33]
root 469 2 0 09:53 ? 00:00:00 [kblockd/34]
root 470 2 0 09:53 ? 00:00:00 [kblockd/35]
root 471 2 0 09:53 ? 00:00:00 [kblockd/36]
root 472 2 0 09:53 ? 00:00:00 [kblockd/37]
root 473 2 0 09:53 ? 00:00:00 [kblockd/38]
root 474 2 0 09:53 ? 00:00:00 [kblockd/39]
root 475 2 0 09:53 ? 00:00:00 [kblockd/40]
root 476 2 0 09:53 ? 00:00:00 [kblockd/41]
root 477 2 0 09:53 ? 00:00:00 [kblockd/42]
root 478 2 0 09:53 ? 00:00:00 [kblockd/43]
root 479 2 0 09:53 ? 00:00:00 [kblockd/44]
root 480 2 0 09:53 ? 00:00:00 [kblockd/45]
root 481 2 0 09:53 ? 00:00:00 [kblockd/46]
root 482 2 0 09:53 ? 00:00:00 [kblockd/47]
root 483 2 0 09:53 ? 00:00:00 [kblockd/48]
root 484 2 0 09:53 ? 00:00:00 [kblockd/49]
root 485 2 0 09:53 ? 00:00:00 [kblockd/50]
root 486 2 0 09:53 ? 00:00:00 [kblockd/51]
root 487 2 0 09:53 ? 00:00:00 [kblockd/52]
root 488 2 0 09:53 ? 00:00:00 [kblockd/53]
root 489 2 0 09:53 ? 00:00:00 [kblockd/54]
root 490 2 0 09:53 ? 00:00:00 [kblockd/55]
root 491 2 0 09:53 ? 00:00:00 [kblockd/56]
root 492 2 0 09:53 ? 00:00:00 [kblockd/57]
root 493 2 0 09:53 ? 00:00:00 [kblockd/58]
root 494 2 0 09:53 ? 00:00:00 [kblockd/59]
root 495 2 0 09:53 ? 00:00:00 [kblockd/60]
root 496 2 0 09:53 ? 00:00:00 [kblockd/61]
root 497 2 0 09:53 ? 00:00:00 [kblockd/62]
root 498 2 0 09:53 ? 00:00:00 [kblockd/63]
root 500 2 0 09:53 ? 00:00:00 [kacpid]
root 501 2 0 09:53 ? 00:00:00 [kacpi_notify]
root 502 2 0 09:53 ? 00:00:00 [kacpi_hotplug]
root 720 2 0 09:53 ? 00:00:00 [ata/0]
root 721 2 0 09:53 ? 00:00:00 [ata_aux]
root 723 2 0 09:53 ? 00:00:00 [kseriod]
root 757 2 0 09:53 ? 00:00:00 [kondemand/0]
root 1287 2 0 09:53 ? 00:00:00 [khungtaskd]
root 1288 2 0 09:53 ? 00:00:00 [kswapd0]
root 1335 2 0 09:53 ? 00:00:00 [aio/0]
root 1349 2 0 09:53 ? 00:00:00 [nfsiod]
root 2154 2 0 09:53 ? 00:00:00 [scsi_eh_0]
root 2181 2 0 09:53 ? 00:00:00 [scsi_eh_1]
root 2184 2 0 09:53 ? 00:00:00 [scsi_eh_2]
root 2186 2 0 09:53 ? 00:00:00 [scsi_eh_3]
root 2188 2 0 09:53 ? 00:00:00 [scsi_eh_4]
root 2190 2 0 09:53 ? 00:00:00 [scsi_eh_5]
root 2192 2 0 09:53 ? 00:00:00 [scsi_eh_6]
root 2223 2 0 09:53 ? 00:00:00 [kpsmoused]
root 2227 2 0 09:53 ? 00:00:00 [rpciod/0]
root 2245 2 0 09:53 ? 00:00:00 [kondemand/1]
root 2246 2 0 09:53 ? 00:00:00 [kondemand/2]
root 2247 2 0 09:53 ? 00:00:00 [kondemand/4]
root 2278 2 0 09:53 ? 00:00:00 [kondemand/36]
root 2279 2 0 09:53 ? 00:00:00 [kondemand/39]
root 2301 2 0 09:53 ? 00:00:00 [kondemand/63]
root 2304 2 0 09:53 ? 00:00:00 [kondemand/43]
root 2308 2 0 09:53 ? 00:00:00 [kondemand/37]
root 2309 2 0 09:53 ? 00:00:00 [kondemand/32]
root 2310 2 0 09:53 ? 00:00:00 [kondemand/8]
root 2313 2 0 09:53 ? 00:00:00 [kjournald]
root 2314 2 0 09:53 ? 00:00:00 [kondemand/44]
root 2317 2 0 09:53 ? 00:00:00 [kondemand/41]
root 2322 2 0 09:53 ? 00:00:00 [kondemand/52]
root 2327 2 0 09:53 ? 00:00:00 [kondemand/60]
root 2329 2 0 09:53 ? 00:00:00 [kondemand/48]
root 2331 2 0 09:53 ? 00:00:00 [kondemand/56]
root 2352 2 0 09:53 ? 00:00:00 [kondemand/28]
root 2369 2 0 09:53 ? 00:00:00 [kondemand/16]
root 2383 1 1 09:53 ? 00:00:03 udevd --daemon
root 2392 2 0 09:53 ? 00:00:00 [kondemand/40]
root 2395 2 0 09:53 ? 00:00:00 [kondemand/45]
root 2398 2 0 09:53 ? 00:00:00 [kondemand/11]
root 2401 2 0 09:53 ? 00:00:00 [kondemand/33]
root 2427 2 0 09:53 ? 00:00:00 [kondemand/49]
root 2437 2 0 09:53 ? 00:00:00 [kondemand/47]
root 2442 2 0 09:53 ? 00:00:00 [kondemand/13]
root 2447 2 0 09:53 ? 00:00:00 [kondemand/51]
root 2451 2 0 09:53 ? 00:00:00 [kondemand/55]
root 2452 2 0 09:53 ? 00:00:00 [kondemand/59]
root 2474 2 0 09:53 ? 00:00:00 [kondemand/53]
root 2480 2 0 09:53 ? 00:00:00 [kondemand/57]
root 2515 2 0 09:53 ? 00:00:00 [kondemand/7]
root 2564 2 0 09:53 ? 00:00:00 [kondemand/61]
root 2577 2 0 09:53 ? 00:00:00 [kondemand/35]
root 3648 2 0 09:53 ? 00:00:00 [ksuspend_usbd]
root 3655 2 0 09:53 ? 00:00:00 [khubd]
root 3710 2 0 09:53 ? 00:00:00 [mpt_poll_0]
root 3711 2 0 09:53 ? 00:00:00 [mpt/0]
root 3873 2 0 09:53 ? 00:00:00 [kondemand/20]
root 3901 2 0 09:53 ? 00:00:00 [usbhid_resumer]
root 3931 2 0 09:53 ? 00:00:00 [kondemand/29]
root 3932 2 0 09:53 ? 00:00:00 [kondemand/5]
root 3987 2 0 09:53 ? 00:00:00 [kondemand/9]
root 4094 2 0 09:53 ? 00:00:00 [scsi_eh_7]
root 4109 2 0 09:53 ? 00:00:00 [kondemand/12]
root 4130 2 0 09:53 ? 00:00:00 [kondemand/17]
root 4132 2 0 09:53 ? 00:00:00 [kondemand/21]
root 4199 2 0 09:53 ? 00:00:00 [kondemand/25]
root 4459 2 0 09:54 ? 00:00:00 [kjournald]
root 4525 2 0 09:54 ? 00:00:00 [flush-8:0]
root 4543 1 0 09:54 ? 00:00:00 dhclient3 -pf /var/run/dhclient.eth0.pid -lf /var/lib/dhcp3/dhclient.eth0.leases eth0
daemon 4560 1 0 09:54 ? 00:00:00 /sbin/portmap
statd 4571 1 0 09:54 ? 00:00:00 /sbin/rpc.statd
root 4732 1 0 09:54 ? 00:00:00 /usr/sbin/rsyslogd -c3
root 4743 1 0 09:54 ? 00:00:00 /usr/sbin/acpid
root 4756 1 0 09:54 ? 00:00:00 /usr/sbin/sshd
101 5061 1 0 09:54 ? 00:00:00 /usr/sbin/exim4 -bd -q30m
daemon 5088 1 0 09:54 ? 00:00:00 /usr/sbin/atd
root 5108 1 0 09:54 ? 00:00:00 /usr/sbin/cron
root 5125 1 0 09:54 tty1 00:00:00 /sbin/getty 38400 tty1
root 5126 1 0 09:54 tty2 00:00:00 /sbin/getty 38400 tty2
root 5127 1 0 09:54 tty3 00:00:00 /sbin/getty 38400 tty3
root 5128 1 0 09:54 tty4 00:00:00 /sbin/getty 38400 tty4
root 5129 1 0 09:54 tty5 00:00:00 /sbin/getty 38400 tty5
root 5130 1 0 09:54 tty6 00:00:00 /sbin/getty 38400 tty6
root 5159 2 0 09:55 ? 00:00:00 [kondemand/38]
root 5182 4756 1 09:56 ? 00:00:00 sshd: axboe [priv]
axboe 5186 5182 0 09:56 ? 00:00:00 sshd: axboe@pts/0
axboe 5187 5186 0 09:56 pts/0 00:00:00 -bash
axboe 5190 5187 0 09:56 pts/0 00:00:00 ps -ef
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PATCH 5/6] aio: use lazy workqueues
2009-08-20 15:09 ` Jeff Moyer
@ 2009-08-21 18:31 ` Zach Brown
0 siblings, 0 replies; 28+ messages in thread
From: Zach Brown @ 2009-08-21 18:31 UTC (permalink / raw)
To: Jeff Moyer; +Cc: Jens Axboe, linux-kernel, jeff, benh, htejun, bzolnier, alan
> So far as I can tell, the aio workqueue isn't used for much these days.
> We could probably get away with switching to keventd.
It's only used by drivers/usb/gadget to implement O_DIRECT reads by
DMAing into kmalloc()ed memory and then performing the copy_to_user() in
the retry thread's task context after it has assumed the submitting
task's mm.
> Zach, isn't
> someone working on a patch to get rid of all of the -EIOCBRETRY
> infrastructure? That patch would probably make things clearer in this
> area.
Yeah, a startling amount of fs/aio.c vanishes if we get rid of
EIOCBRETRY. I'm puttering away at it, but I'll be on holiday next week
so it'll be a while before anything emerges.
- z
* Re: [PATCH 0/6] Lazy workqueues
2009-08-20 12:16 ` Peter Zijlstra
@ 2009-08-23 2:42 ` Junio C Hamano
2009-08-24 7:04 ` git send-email defaults Peter Zijlstra
2009-08-24 8:04 ` [PATCH 0/6] Lazy workqueues Jens Axboe
0 siblings, 2 replies; 28+ messages in thread
From: Junio C Hamano @ 2009-08-23 2:42 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Jens Axboe, linux-kernel, jeff, benh, htejun, bzolnier, alan
Peter Zijlstra <peterz@infradead.org> writes:
> On Thu, 2009-08-20 at 14:08 +0200, Jens Axboe wrote:
> ...
>> That's pretty new... But perhaps I should complain too, it's been
>> annoying me forever.
>
> http://marc.info/?l=git&m=123457137328461&w=2
>
> Apparently it didn't happen, nor did I ever see a reply to that posting.
>
> Junio, what happened here?
Nothing happened.
I do not recall anybody objecting to it, but then when nothing happened in
either 1.6.3 or 1.6.4, nobody jumped up and down demanding the change of
default either. So the overall impression I got from this was that nobody
really cared deeply enough either way.
But we are talking about making 1.7.0 the release that corrects the wrong
defaults we have had, once and for all ;-), and I am tempted to roll this
topic into the mix. Here is what I queued to my 'next' branch tonight.
-- >8 --
From: Junio C Hamano <gitster@pobox.com>
Date: Sat, 22 Aug 2009 12:48:48 -0700
Subject: [PATCH] send-email: make --no-chain-reply-to the default
In http://article.gmane.org/gmane.comp.version-control.git/109790 I
threatened to announce a change to the default threading style used by
send-email to no-chain-reply-to (i.e. the second and subsequent messages
will all be replies to the first one) in 1.6.3, unless somebody objected.
Nobody objected, as far as I can tell from the list archive. But when
nothing happened in either 1.6.3 or 1.6.4, nobody from the camp whose loud
complaints led to that message complained either.
So I am guessing that after all nobody cares about this. But 1.7.0 is a
good time to change this, and as I said in the message, I personally think
it is a good change, so here it is.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---
Documentation/git-send-email.txt | 6 +++---
git-send-email.perl | 4 ++--
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/Documentation/git-send-email.txt b/Documentation/git-send-email.txt
index 767cf4d..626c2dc 100644
--- a/Documentation/git-send-email.txt
+++ b/Documentation/git-send-email.txt
@@ -84,7 +84,7 @@ See the CONFIGURATION section for 'sendemail.multiedit'.
--in-reply-to=<identifier>::
Specify the contents of the first In-Reply-To header.
Subsequent emails will refer to the previous email
- instead of this if --chain-reply-to is set (the default)
+ instead of this if --chain-reply-to is set.
Only necessary if --compose is also set. If --compose
is not set, this will be prompted for.
@@ -171,8 +171,8 @@ Automating
email sent. If disabled with "--no-chain-reply-to", all emails after
the first will be sent as replies to the first email sent. When using
this, it is recommended that the first file given be an overview of the
- entire patch series. Default is the value of the 'sendemail.chainreplyto'
- configuration value; if that is unspecified, default to --chain-reply-to.
+ entire patch series. Disabled by default, but the 'sendemail.chainreplyto'
+ configuration variable can be used to enable it.
--identity=<identity>::
A configuration identity. When given, causes values in the
diff --git a/git-send-email.perl b/git-send-email.perl
index 0700d80..c1d0930 100755
--- a/git-send-email.perl
+++ b/git-send-email.perl
@@ -71,7 +71,7 @@ git send-email [options] <file | directory | rev-list options >
--suppress-cc <str> * author, self, sob, cc, cccmd, body, bodycc, all.
--[no-]signed-off-by-cc * Send to Signed-off-by: addresses. Default on.
--[no-]suppress-from * Send to self. Default off.
- --[no-]chain-reply-to * Chain In-Reply-To: fields. Default on.
+ --[no-]chain-reply-to * Chain In-Reply-To: fields. Default off.
--[no-]thread * Use In-Reply-To: field. Default on.
Administering:
@@ -188,7 +188,7 @@ my (@suppress_cc);
my %config_bool_settings = (
"thread" => [\$thread, 1],
- "chainreplyto" => [\$chain_reply_to, 1],
+ "chainreplyto" => [\$chain_reply_to, undef],
"suppressfrom" => [\$suppress_from, undef],
"signedoffbycc" => [\$signed_off_by_cc, undef],
"signedoffcc" => [\$signed_off_by_cc, undef], # Deprecated
--
1.6.4.1.255.g5556a
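The behavioral difference the patch is about can be sketched in a few lines:
with chain-reply-to, each mail replies to the previous one (deep threading);
without it, every mail after the first replies to the first one (shallow
threading). A hypothetical Python model of just the parent selection, not
git's actual implementation:

```python
def in_reply_to(message_ids, chain_reply_to):
    """Return the In-Reply-To parent for each message in a series.

    chain_reply_to=True  -> deep threading: each mail replies to the
                            previous mail.
    chain_reply_to=False -> shallow threading (the new default): every
                            mail after the first replies to the first.
    """
    parents = [None]  # the first mail (cover letter) starts the thread
    for i in range(1, len(message_ids)):
        parents.append(message_ids[i - 1] if chain_reply_to else message_ids[0])
    return parents

ids = ["<cover>", "<patch1>", "<patch2>", "<patch3>"]
print(in_reply_to(ids, chain_reply_to=True))   # deep: each replies to previous
print(in_reply_to(ids, chain_reply_to=False))  # shallow: all reply to cover
```

With shallow threading, a mail reader shows the whole series as direct
children of the cover letter instead of one long nested chain.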
* git send-email defaults
2009-08-23 2:42 ` Junio C Hamano
@ 2009-08-24 7:04 ` Peter Zijlstra
2009-08-24 8:04 ` [PATCH 0/6] Lazy workqueues Jens Axboe
1 sibling, 0 replies; 28+ messages in thread
From: Peter Zijlstra @ 2009-08-24 7:04 UTC (permalink / raw)
To: Junio C Hamano
Cc: Jens Axboe, linux-kernel, jeff, benh, htejun, bzolnier, alan
On Sat, 2009-08-22 at 19:42 -0700, Junio C Hamano wrote:
> Peter Zijlstra <peterz@infradead.org> writes:
>
> > On Thu, 2009-08-20 at 14:08 +0200, Jens Axboe wrote:
> > ...
> >> That's pretty new... But perhaps I should complain too, it's been
> >> annoying me forever.
> >
> > http://marc.info/?l=git&m=123457137328461&w=2
> >
> > Apparently it didn't happen, nor did I ever see a reply to that posting.
> >
> > Junio, what happened here?
>
> Nothing happened.
>
> I do not recall anybody objecting to it, but then when nothing happened in
> either 1.6.3 or 1.6.4, nobody jumped up and down demanding the change of
> default either. So the overall impression I got from this was that nobody
> really cared deeply enough either way.
And here I was thinking it was settled when no objections came ;-)
> But we are talking about making 1.7.0 the release that corrects the wrong
> defaults we have had, once and for all ;-), and I am tempted to roll this
> topic into the mix. Here is what I queued to my 'next' branch tonight.
The sooner this hits the distros, the better.
Thanks for committing the change, looking forward to 1.7.
* Re: [PATCH 0/6] Lazy workqueues
2009-08-23 2:42 ` Junio C Hamano
2009-08-24 7:04 ` git send-email defaults Peter Zijlstra
@ 2009-08-24 8:04 ` Jens Axboe
2009-08-24 9:03 ` Junio C Hamano
1 sibling, 1 reply; 28+ messages in thread
From: Jens Axboe @ 2009-08-24 8:04 UTC (permalink / raw)
To: Junio C Hamano
Cc: Peter Zijlstra, linux-kernel, jeff, benh, htejun, bzolnier, alan
On Sat, Aug 22 2009, Junio C Hamano wrote:
> Peter Zijlstra <peterz@infradead.org> writes:
>
> > On Thu, 2009-08-20 at 14:08 +0200, Jens Axboe wrote:
> > ...
> >> That's pretty new... But perhaps I should complain too, it's been
> >> annoying me forever.
> >
> > http://marc.info/?l=git&m=123457137328461&w=2
> >
> > Apparently it didn't happen, nor did I ever see a reply to that posting.
> >
> > Junio, what happened here?
>
> Nothing happened.
>
> I do not recall anybody objecting to it, but then when nothing happened in
> either 1.6.3 or 1.6.4, nobody jumped up and down demanding the change of
> default either. So the overall impression I got from this was that nobody
> really cared deeply enough either way.
That's some strange logic right there :-). Of course nobody complained,
they thought it was a done deal.
> But we are talking about making 1.7.0 the release that corrects the wrong
> defaults we have had, once and for all ;-), and I am tempted to roll this
> topic into the mix. Here is what I queued to my 'next' branch tonight.
OK that's at least something, looking forward to being able to prune
that argument from my scripts. It completely destroys viewability of
larger patchsets.
--
Jens Axboe
* Re: [PATCH 0/6] Lazy workqueues
2009-08-24 8:04 ` [PATCH 0/6] Lazy workqueues Jens Axboe
@ 2009-08-24 9:03 ` Junio C Hamano
2009-08-24 9:11 ` Peter Zijlstra
0 siblings, 1 reply; 28+ messages in thread
From: Junio C Hamano @ 2009-08-24 9:03 UTC (permalink / raw)
To: Jens Axboe
Cc: Peter Zijlstra, linux-kernel, jeff, benh, htejun, bzolnier, alan
Jens Axboe <jens.axboe@oracle.com> writes:
> OK that's at least something, looking forward to being able to prune
> that argument from my scripts.
Ahahh.
An option everybody will want to pass but is prone to be forgotten and
hard to type from the command line is one thing, but if you are scripting
in order to reuse the script over and over, that is a separate story. Is
losing an option from your script really the goal of this fuss?
In any case, you need not wait for a new version or a patch at all for
that goal. You can simply add
[sendemail]
chainreplyto = no
to your .git/config (or $HOME/.gitconfig). With that setting, both your
script and your command-line invocation will default to not creating deep
threads.
* Re: [PATCH 0/6] Lazy workqueues
2009-08-24 9:03 ` Junio C Hamano
@ 2009-08-24 9:11 ` Peter Zijlstra
0 siblings, 0 replies; 28+ messages in thread
From: Peter Zijlstra @ 2009-08-24 9:11 UTC (permalink / raw)
To: Junio C Hamano
Cc: Jens Axboe, linux-kernel, jeff, benh, htejun, bzolnier, alan
On Mon, 2009-08-24 at 02:03 -0700, Junio C Hamano wrote:
> Jens Axboe <jens.axboe@oracle.com> writes:
>
> > OK that's at least something, looking forward to being able to prune
> > that argument from my scripts.
>
> Ahahh.
>
> An option everybody will want to pass but is prone to be forgotten and
> hard to type from the command line is one thing, but if you are scripting
> in order to reuse the script over and over, that is a separate story. Is
> losing an option from your script really the goal of this fuss?
>
> In any case, you need not wait for a new version or a patch at all for
> that goal. You can simply add
>
> [sendemail]
> chainreplyto = no
>
> to your .git/config (or $HOME/.gitconfig). With that setting, both your
> script and your command-line invocation will default to not creating deep
> threads.
For me it's about getting the default right, because lots of people
simply use git-send-email without scripts, and often .gitconfig gets lost
or simply doesn't get carried around to the various development machines.
Also, it stops every new person mailing patches from having to be told to
flip that setting.
* [PATCH 0/6] Lazy workqueues
@ 2009-08-20 10:17 Jens Axboe
0 siblings, 0 replies; 28+ messages in thread
From: Jens Axboe @ 2009-08-20 10:17 UTC (permalink / raw)
To: linux-kernel; +Cc: jeff, benh, htejun, bzolnier, alan
Hi,
After yesterday's rant on having too many kernel threads and checking
how many I actually have running on this system (531!), I decided to
try to do something about it.
My goal was to retain the workqueue interface instead of coming up with
a new scheme that required conversion (or converting to slow_work which,
btw, is an awful name :-). I also wanted to retain the affinity
guarantees of workqueues as much as possible.
So this is a first step in that direction; it's probably full of races
and holes, but it should get the idea across. It adds a
create_lazy_workqueue() helper, similar to the other variants that we
currently have. A lazy workqueue works like a normal workqueue, except
that (by default) it only starts a core thread instead of threads for
all online CPUs. When work is queued on a lazy workqueue for a CPU
that doesn't have a thread running, it is placed on the core CPU's
list, and the core thread then creates the target thread and moves the
work over to it. Should task creation fail, the queued work is executed
on the core CPU instead. Once a lazy workqueue thread has been idle for
a certain amount of time, it exits again.
The patch boots here and I exercised the rpciod workqueue and
verified that it gets created, runs on the right CPU, and exits a while
later. So core functionality should be there, even if it has holes.
With this patchset, I am now down to 280 kernel threads on one of my test
boxes. Still too many, but it's a start and a net reduction of 251
threads here, or 47%!
The code can also be pulled from:
git://git.kernel.dk/linux-2.6-block.git workqueue
--
Jens Axboe
end of thread, other threads:[~2009-08-24 9:12 UTC | newest]
Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-08-20 10:19 [PATCH 0/6] Lazy workqueues Jens Axboe
2009-08-20 10:19 ` [PATCH 1/6] workqueue: replace singlethread/freezable/rt parameters and variables with flags Jens Axboe
2009-08-20 10:20 ` [PATCH 2/6] workqueue: add support for lazy workqueues Jens Axboe
2009-08-20 12:01 ` Frederic Weisbecker
2009-08-20 12:10 ` Jens Axboe
2009-08-20 10:20 ` [PATCH 3/6] crypto: use " Jens Axboe
2009-08-20 10:20 ` [PATCH 4/6] libata: use lazy workqueues for the pio task Jens Axboe
2009-08-20 12:40 ` Stefan Richter
2009-08-20 12:48 ` Jens Axboe
2009-08-20 10:20 ` [PATCH 5/6] aio: use lazy workqueues Jens Axboe
2009-08-20 15:09 ` Jeff Moyer
2009-08-21 18:31 ` Zach Brown
2009-08-20 10:20 ` [PATCH 6/6] sunrpc: " Jens Axboe
2009-08-20 12:04 ` [PATCH 0/6] Lazy workqueues Peter Zijlstra
2009-08-20 12:08 ` Jens Axboe
2009-08-20 12:16 ` Peter Zijlstra
2009-08-23 2:42 ` Junio C Hamano
2009-08-24 7:04 ` git send-email defaults Peter Zijlstra
2009-08-24 8:04 ` [PATCH 0/6] Lazy workqueues Jens Axboe
2009-08-24 9:03 ` Junio C Hamano
2009-08-24 9:11 ` Peter Zijlstra
2009-08-20 12:22 ` Frederic Weisbecker
2009-08-20 12:41 ` Jens Axboe
2009-08-20 13:04 ` Tejun Heo
2009-08-20 12:59 ` Steven Whitehouse
2009-08-20 12:55 ` Tejun Heo
2009-08-21 6:58 ` Jens Axboe
-- strict thread matches above, loose matches on Subject: below --
2009-08-20 10:17 Jens Axboe