* [PATCH 0/4] workqueue: Control cpu affinity of all unbound workqueues v2
@ 2014-03-27 17:20 Frederic Weisbecker
  2014-03-27 17:20 ` [PATCH 1/4] workqueue: Move workqueue bus attr to device attribute Frederic Weisbecker
                   ` (4 more replies)
  0 siblings, 5 replies; 24+ messages in thread
From: Frederic Weisbecker @ 2014-03-27 17:20 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Christoph Lameter, Kevin Hilman,
	Mike Galbraith, Paul E. McKenney, Tejun Heo, Viresh Kumar

Hi,

So this version now handles ordered workqueues as well, by changing the
affinity of the ordered pools as per Tejun's suggestion. See patch 4/4 for
details. I'm just not 100% sure about the locking correctness.

I also fixed a few comments in patch 2/4 to better describe unbound mutex
coverage.

git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git
	core/workqueue-anon-sysfs-v2

Thanks,
	Frederic
---

Frederic Weisbecker (4):
      workqueue: Move workqueue bus attr to device attribute
      workqueues: Account unbound workqueue in a seperate list
      workqueue: Add anon workqueue sysfs hierarchy
      workqueue: Include ordered workqueues in anon workqueue sysfs interface


 kernel/workqueue.c | 201 ++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 191 insertions(+), 10 deletions(-)


* [PATCH 1/4] workqueue: Move workqueue bus attr to device attribute
  2014-03-27 17:20 [PATCH 0/4] workqueue: Control cpu affinity of all unbound workqueues v2 Frederic Weisbecker
@ 2014-03-27 17:20 ` Frederic Weisbecker
  2014-04-03  7:09   ` Viresh Kumar
  2014-03-27 17:21 ` [PATCH 2/4] workqueues: Account unbound workqueue in a seperate list Frederic Weisbecker
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 24+ messages in thread
From: Frederic Weisbecker @ 2014-03-27 17:20 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Christoph Lameter, Kevin Hilman,
	Mike Galbraith, Paul E. McKenney, Tejun Heo, Viresh Kumar

A workqueue directory implements at least two files: max_active and
per_cpu. Since thse are constant over WQ_SYSFS workqueues, they are
implemented as bus attributes.

Then come the attributes that only belong to unbound workqueues. Those
are implemented as device attributes.

Now we are planning to implement a specific workqueue directory in order
to control the cpu affinity of all non-ABI unbound workqueues at once.
So far we are only interested in the cpumask file in this virtual
workqueue directory: we can't have a single max_active or per_cpu file
for all non-ABI unbound workqueues, for example. And for now, cpu
affinity is the only all-in-one property we want to share.

So we want to be able to create a directory containing only the cpumask
file. Let's move the constant workqueue attrs to device attributes so that
we can skip them when we want.

Cc: Christoph Lameter <cl@linux.com>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Mike Galbraith <bitbucket@online.de>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
---
 kernel/workqueue.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 193e977..4d230e3 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3133,7 +3133,6 @@ static ssize_t per_cpu_show(struct device *dev, struct device_attribute *attr,
 
 	return scnprintf(buf, PAGE_SIZE, "%d\n", (bool)!(wq->flags & WQ_UNBOUND));
 }
-static DEVICE_ATTR_RO(per_cpu);
 
 static ssize_t max_active_show(struct device *dev,
 			       struct device_attribute *attr, char *buf)
@@ -3156,14 +3155,12 @@ static ssize_t max_active_store(struct device *dev,
 	workqueue_set_max_active(wq, val);
 	return count;
 }
-static DEVICE_ATTR_RW(max_active);
 
-static struct attribute *wq_sysfs_attrs[] = {
-	&dev_attr_per_cpu.attr,
-	&dev_attr_max_active.attr,
-	NULL,
+static struct device_attribute wq_sysfs_default_attrs[] = {
+	__ATTR(per_cpu, 0444, per_cpu_show, NULL),
+	__ATTR(max_active, 0644, max_active_show, max_active_store),
+	__ATTR_NULL,
 };
-ATTRIBUTE_GROUPS(wq_sysfs);
 
 static ssize_t wq_pool_ids_show(struct device *dev,
 				struct device_attribute *attr, char *buf)
@@ -3313,7 +3310,6 @@ static struct device_attribute wq_sysfs_unbound_attrs[] = {
 
 static struct bus_type wq_subsys = {
 	.name				= "workqueue",
-	.dev_groups			= wq_sysfs_groups,
 };
 
 static int __init wq_sysfs_init(void)
@@ -3346,6 +3342,7 @@ static void wq_device_release(struct device *dev)
  */
 int workqueue_sysfs_register(struct workqueue_struct *wq)
 {
+	struct device_attribute *attr;
 	struct wq_device *wq_dev;
 	int ret;
 
@@ -3379,8 +3376,16 @@ int workqueue_sysfs_register(struct workqueue_struct *wq)
 		return ret;
 	}
 
+	for (attr = wq_sysfs_default_attrs; attr->attr.name; attr++) {
+		ret = device_create_file(&wq_dev->dev, attr);
+		if (ret) {
+			device_unregister(&wq_dev->dev);
+			wq->wq_dev = NULL;
+			return ret;
+		}
+	}
+
 	if (wq->flags & WQ_UNBOUND) {
-		struct device_attribute *attr;
 
 		for (attr = wq_sysfs_unbound_attrs; attr->attr.name; attr++) {
 			ret = device_create_file(&wq_dev->dev, attr);
-- 
1.8.3.1



* [PATCH 2/4] workqueues: Account unbound workqueue in a seperate list
  2014-03-27 17:20 [PATCH 0/4] workqueue: Control cpu affinity of all unbound workqueues v2 Frederic Weisbecker
  2014-03-27 17:20 ` [PATCH 1/4] workqueue: Move workqueue bus attr to device attribute Frederic Weisbecker
@ 2014-03-27 17:21 ` Frederic Weisbecker
  2014-03-30 12:57   ` Tejun Heo
  2014-03-27 17:21 ` [PATCH 3/4] workqueue: Add anon workqueue sysfs hierarchy Frederic Weisbecker
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 24+ messages in thread
From: Frederic Weisbecker @ 2014-03-27 17:21 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Christoph Lameter, Kevin Hilman,
	Mike Galbraith, Paul E. McKenney, Tejun Heo, Viresh Kumar

The workqueues are all listed in a global list protected by a big mutex.
And this big mutex is used in apply_workqueue_attrs() as well.

Now as we plan to implement a directory to control the cpumask of
all non-ABI unbound workqueues, we want to be able to iterate over all
unbound workqueues and call apply_workqueue_attrs() for each of
them with the new cpumask.

But a deadlock risk stands in the way: we need to iterate the list
of workqueues under wq_pool_mutex, but apply_workqueue_attrs()
itself takes wq_pool_mutex.
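
To illustrate, the naive iteration we must avoid would look like this (a
sketch of the problem only, not code from this series):

	mutex_lock(&wq_pool_mutex);
	list_for_each_entry(wq, &workqueues, list) {
		if (wq->flags & WQ_UNBOUND)
			apply_workqueue_attrs(wq, attrs); /* takes wq_pool_mutex again */
	}
	mutex_unlock(&wq_pool_mutex);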

The easiest solution to work around this is to keep track of unbound
workqueues in a separate list with a separate mutex.

Cc: Christoph Lameter <cl@linux.com>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Mike Galbraith <bitbucket@online.de>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
---
 kernel/workqueue.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 4d230e3..8749bef 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -131,6 +131,8 @@ enum {
  *
  * PR: wq_pool_mutex protected for writes.  Sched-RCU protected for reads.
  *
+ * PU: wq_unbound_mutex protected
+ *
  * WQ: wq->mutex protected.
  *
  * WR: wq->mutex protected for writes.  Sched-RCU protected for reads.
@@ -232,6 +234,7 @@ struct wq_device;
 struct workqueue_struct {
 	struct list_head	pwqs;		/* WR: all pwqs of this wq */
 	struct list_head	list;		/* PL: list of all workqueues */
+	struct list_head	unbound_list;	/* PU: list of unbound workqueues */
 
 	struct mutex		mutex;		/* protects this wq */
 	int			work_color;	/* WQ: current work color */
@@ -288,9 +291,11 @@ static bool wq_numa_enabled;		/* unbound NUMA affinity enabled */
 static struct workqueue_attrs *wq_update_unbound_numa_attrs_buf;
 
 static DEFINE_MUTEX(wq_pool_mutex);	/* protects pools and workqueues list */
+static DEFINE_MUTEX(wq_unbound_mutex);	/* protects list of unbound workqueues */
 static DEFINE_SPINLOCK(wq_mayday_lock);	/* protects wq->maydays list */
 
 static LIST_HEAD(workqueues);		/* PL: list of all workqueues */
+static LIST_HEAD(workqueues_unbound);	/* PU: list of unbound workqueues */
 static bool workqueue_freezing;		/* PL: have wqs started freezing? */
 
 /* the per-cpu worker pools */
@@ -4263,6 +4268,12 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
 
 	mutex_unlock(&wq_pool_mutex);
 
+	if (wq->flags & WQ_UNBOUND) {
+		mutex_lock(&wq_unbound_mutex);
+		list_add(&wq->unbound_list, &workqueues_unbound);
+		mutex_unlock(&wq_unbound_mutex);
+	}
+
 	return wq;
 
 err_free_wq:
@@ -4318,6 +4329,12 @@ void destroy_workqueue(struct workqueue_struct *wq)
 	list_del_init(&wq->list);
 	mutex_unlock(&wq_pool_mutex);
 
+	if (wq->flags & WQ_UNBOUND) {
+		mutex_lock(&wq_unbound_mutex);
+		list_del(&wq->unbound_list);
+		mutex_unlock(&wq_unbound_mutex);
+	}
+
 	workqueue_sysfs_unregister(wq);
 
 	if (wq->rescuer) {
-- 
1.8.3.1



* [PATCH 3/4] workqueue: Add anon workqueue sysfs hierarchy
  2014-03-27 17:20 [PATCH 0/4] workqueue: Control cpu affinity of all unbound workqueues v2 Frederic Weisbecker
  2014-03-27 17:20 ` [PATCH 1/4] workqueue: Move workqueue bus attr to device attribute Frederic Weisbecker
  2014-03-27 17:21 ` [PATCH 2/4] workqueues: Account unbound workqueue in a seperate list Frederic Weisbecker
@ 2014-03-27 17:21 ` Frederic Weisbecker
  2014-03-30 13:01   ` Tejun Heo
  2014-04-03  7:07   ` Viresh Kumar
  2014-03-27 17:21 ` [PATCH 4/4] workqueue: Include ordered workqueues in anon workqueue sysfs interface Frederic Weisbecker
  2014-04-15  9:58 ` [PATCH] workqueue: allow changing attributions of ordered workqueue Lai Jiangshan
  4 siblings, 2 replies; 24+ messages in thread
From: Frederic Weisbecker @ 2014-03-27 17:21 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Christoph Lameter, Kevin Hilman,
	Mike Galbraith, Paul E. McKenney, Tejun Heo, Viresh Kumar

We call "anon workqueues" the set of unbound workqueues that don't
carry the WQ_SYSFS flag.

They are a problem nowadays because people who work on CPU isolation
(HPC, Real time, etc...) want to be able to migrate all the unbound
workqueues away to a single housekeeping CPU. This control is possible
through sysfs but only with WQ_SYSFS workqueues.

Now we need to deal with the other unbound workqueues. There are two
possible solutions:

1) Implement a sysfs directory for each unbound !WQ_SYSFS workqueue. This
could be done with a specific Kconfig to make sure that these workqueues
won't be considered a stable ABI. But we all know that all distros
will enable this Kconfig symbol and that a warning in the Kconfig help
text won't protect against anything.

2) Implement a single sysfs directory containing only the cpumask file
to control the affinity of all the !WQ_SYSFS workqueues.

This patch implements the second solution but only for non-ordered
unbound workqueues. Ordered workqueues need a special treatment and
will be handled in a subsequent patch.
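
As an example, once this directory exists, a housekeeping setup could
confine every anon workqueue to CPU 0 with something like this (just an
illustration of the intended usage, via the file added below):

	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		/* "1" = cpumask with only CPU 0 set */
		int fd = open("/sys/devices/virtual/workqueue/anon_wqs/cpumask",
			      O_WRONLY);

		if (fd < 0)
			return 1;
		write(fd, "1\n", 2);
		close(fd);
		return 0;
	}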

Cc: Christoph Lameter <cl@linux.com>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Mike Galbraith <bitbucket@online.de>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
---
 kernel/workqueue.c | 119 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 117 insertions(+), 2 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 8749bef..2863c39 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -291,7 +291,8 @@ static bool wq_numa_enabled;		/* unbound NUMA affinity enabled */
 static struct workqueue_attrs *wq_update_unbound_numa_attrs_buf;
 
 static DEFINE_MUTEX(wq_pool_mutex);	/* protects pools and workqueues list */
-static DEFINE_MUTEX(wq_unbound_mutex);	/* protects list of unbound workqueues */
+/* protects list of unbound workqueues and wq_anon_cpumask */
+static DEFINE_MUTEX(wq_unbound_mutex);
 static DEFINE_SPINLOCK(wq_mayday_lock);	/* protects wq->maydays list */
 
 static LIST_HEAD(workqueues);		/* PL: list of all workqueues */
@@ -3313,13 +3314,127 @@ static struct device_attribute wq_sysfs_unbound_attrs[] = {
 	__ATTR_NULL,
 };
 
+/* Protected by wq_unbound_mutex */
+static cpumask_t wq_anon_cpumask;
+static ssize_t wq_anon_cpumask_show(struct device *dev,
+				    struct device_attribute *attr, char *buf)
+{
+	int written;
+
+	mutex_lock(&wq_unbound_mutex);
+	written = cpumask_scnprintf(buf, PAGE_SIZE, &wq_anon_cpumask);
+	mutex_unlock(&wq_unbound_mutex);
+
+	written += scnprintf(buf + written, PAGE_SIZE - written, "\n");
+
+	return written;
+}
+
+/* Must be called with wq_unbound_mutex held */
+static int wq_anon_cpumask_set(cpumask_var_t cpumask)
+{
+	struct workqueue_attrs *attrs;
+	struct workqueue_struct *wq;
+	int ret = 0;
+
+	list_for_each_entry(wq, &workqueues_unbound, unbound_list) {
+		/* WQ_SYSFS have their own directories */
+		if (wq->flags & WQ_SYSFS)
+			continue;
+		/* Ordered workqueues need specific treatment */
+		if (wq->flags & __WQ_ORDERED)
+			continue;
+
+		attrs = wq_sysfs_prep_attrs(wq);
+		if (!attrs)
+			return -ENOMEM;
+
+		cpumask_copy(attrs->cpumask, cpumask);
+		ret = apply_workqueue_attrs(wq, attrs);
+		free_workqueue_attrs(attrs);
+		if (ret)
+			break;
+	}
+
+	return ret;
+}
+
+static ssize_t wq_anon_cpumask_store(struct device *dev,
+				     struct device_attribute *attr,
+				     const char *buf, size_t count)
+{
+	cpumask_var_t cpumask;
+	int ret = -EINVAL;
+
+	if (!zalloc_cpumask_var(&cpumask, GFP_KERNEL))
+		return -ENOMEM;
+
+	ret = cpumask_parse(buf, cpumask);
+	if (ret)
+		goto out;
+
+	get_online_cpus();
+	if (cpumask_intersects(cpumask, cpu_online_mask)) {
+		mutex_lock(&wq_unbound_mutex);
+		ret = wq_anon_cpumask_set(cpumask);
+		if (!ret)
+			cpumask_copy(&wq_anon_cpumask, cpumask);
+		mutex_unlock(&wq_unbound_mutex);
+	}
+	put_online_cpus();
+out:
+	free_cpumask_var(cpumask);
+	return ret ? ret : count;
+}
+
+static void device_release(struct device *dev)
+{
+	kfree(dev);
+}
+
+static struct device_attribute wq_sysfs_anon_attr =
+	__ATTR(cpumask, 0644, wq_anon_cpumask_show, wq_anon_cpumask_store);
+
 static struct bus_type wq_subsys = {
 	.name				= "workqueue",
 };
 
 static int __init wq_sysfs_init(void)
 {
-	return subsys_virtual_register(&wq_subsys, NULL);
+	struct device *anon_dev;
+	int ret;
+
+	ret = subsys_virtual_register(&wq_subsys, NULL);
+	if (ret < 0)
+		return ret;
+
+	mutex_lock(&wq_unbound_mutex);
+	cpumask_copy(&wq_anon_cpumask, cpu_possible_mask);
+	mutex_unlock(&wq_unbound_mutex);
+
+	anon_dev = kzalloc(sizeof(*anon_dev), GFP_KERNEL);
+	if (!anon_dev)
+		return -ENOMEM;
+
+	anon_dev->bus = &wq_subsys;
+	anon_dev->init_name = "anon_wqs";
+	anon_dev->release = device_release;
+
+	ret = device_register(anon_dev);
+	if (ret) {
+		kfree(anon_dev);
+		return ret;
+	}
+
+	ret = device_create_file(anon_dev, &wq_sysfs_anon_attr);
+	if (ret) {
+		device_unregister(anon_dev);
+		return ret;
+	}
+
+	kobject_uevent(&anon_dev->kobj, KOBJ_ADD);
+
+	return 0;
 }
 core_initcall(wq_sysfs_init);
 
-- 
1.8.3.1



* [PATCH 4/4] workqueue: Include ordered workqueues in anon workqueue sysfs interface
  2014-03-27 17:20 [PATCH 0/4] workqueue: Control cpu affinity of all unbound workqueues v2 Frederic Weisbecker
                   ` (2 preceding siblings ...)
  2014-03-27 17:21 ` [PATCH 3/4] workqueue: Add anon workqueue sysfs hierarchy Frederic Weisbecker
@ 2014-03-27 17:21 ` Frederic Weisbecker
  2014-03-31 12:50   ` Lai Jiangshan
  2014-04-15  9:58 ` [PATCH] workqueue: allow changing attributions of ordered workqueue Lai Jiangshan
  4 siblings, 1 reply; 24+ messages in thread
From: Frederic Weisbecker @ 2014-03-27 17:21 UTC (permalink / raw)
  To: LKML
  Cc: Frederic Weisbecker, Christoph Lameter, Kevin Hilman,
	Mike Galbraith, Paul E. McKenney, Tejun Heo, Viresh Kumar

Changing the cpu affinity of workqueues through the sysfs interface
is done with apply_workqueue_attrs() by replacing the old pwqs
of a workqueue with new ones tied to worker pools that are affine to the
new cpumask.

We can't do that with ordered workqueues however because the serialization
of their works is enforced by keeping a single worker running. Replacing
it with a new pool with a single worker may open a small race window where
two workers could run concurrently during the pwq replacement. Eventually
this can break the ordering guarantee.

So ordered workqueues get special treatment here. Since they run a
single pwq with a single pool containing a single worker that all
ordered workqueues share, we simply change this worker's affinity
to the new cpumask.

Also, just in case this behaviour changes a bit in the future and some
ordered workqueues get their own private worker, let's iterate the affinity
change over all ordered workqueue pools. This way it works in both cases:
whether a single worker is shared among all ordered workqueue pools or
some of them run their private one.

Suggested-by: Tejun Heo <tj@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Mike Galbraith <bitbucket@online.de>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
---
 kernel/workqueue.c | 68 ++++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 56 insertions(+), 12 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 2863c39..18807c7 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3330,10 +3330,60 @@ static ssize_t wq_anon_cpumask_show(struct device *dev,
 	return written;
 }
 
-/* Must be called with wq_unbound_mutex held */
-static int wq_anon_cpumask_set(cpumask_var_t cpumask)
+static int wq_ordered_cpumask_set(struct workqueue_struct *wq, cpumask_var_t cpumask)
+{
+	struct pool_workqueue *pwq;
+	struct worker_pool *pool;
+	struct worker *worker;
+	int ret = 0;
+	int wi;
+
+	mutex_lock(&wq_pool_mutex);
+	pwq = list_first_entry(&wq->pwqs, typeof(*pwq), pwqs_node);
+	pool = pwq->pool;
+
+	mutex_lock(&pool->manager_mutex);
+	for_each_pool_worker(worker, wi, pool) {
+		/* CHECKME: Should we hold pool->lock here? */
+		ret = set_cpus_allowed_ptr(worker->task, cpumask);
+		if (ret)
+			break;
+	}
+	if (!ret) {
+		cpumask_copy(pool->attrs->cpumask, cpumask);
+	}
+	mutex_unlock(&pool->manager_mutex);
+
+	if (!ret) {
+		mutex_lock(&wq->mutex);
+		cpumask_copy(wq->unbound_attrs->cpumask, cpumask);
+		mutex_unlock(&wq->mutex);
+	}
+
+	mutex_unlock(&wq_pool_mutex);
+
+	return ret;
+}
+
+static int wq_anon_cpumask_set(struct workqueue_struct *wq, cpumask_var_t cpumask)
 {
 	struct workqueue_attrs *attrs;
+	int ret;
+
+	attrs = wq_sysfs_prep_attrs(wq);
+	if (!attrs)
+		return -ENOMEM;
+
+	cpumask_copy(attrs->cpumask, cpumask);
+	ret = apply_workqueue_attrs(wq, attrs);
+	free_workqueue_attrs(attrs);
+
+	return ret;
+}
+
+/* Must be called with wq_unbound_mutex held */
+static int wq_anon_cpumask_set_all(cpumask_var_t cpumask)
+{
 	struct workqueue_struct *wq;
 	int ret = 0;
 
@@ -3343,15 +3393,9 @@ static int wq_anon_cpumask_set(cpumask_var_t cpumask)
 			continue;
 		/* Ordered workqueues need specific treatment */
 		if (wq->flags & __WQ_ORDERED)
-			continue;
-
-		attrs = wq_sysfs_prep_attrs(wq);
-		if (!attrs)
-			return -ENOMEM;
-
-		cpumask_copy(attrs->cpumask, cpumask);
-		ret = apply_workqueue_attrs(wq, attrs);
-		free_workqueue_attrs(attrs);
+			ret = wq_ordered_cpumask_set(wq, cpumask);
+		else
+			ret = wq_anon_cpumask_set(wq, cpumask);
 		if (ret)
 			break;
 	}
@@ -3376,7 +3420,7 @@ static ssize_t wq_anon_cpumask_store(struct device *dev,
 	get_online_cpus();
 	if (cpumask_intersects(cpumask, cpu_online_mask)) {
 		mutex_lock(&wq_unbound_mutex);
-		ret = wq_anon_cpumask_set(cpumask);
+		ret = wq_anon_cpumask_set_all(cpumask);
 		if (!ret)
 			cpumask_copy(&wq_anon_cpumask, cpumask);
 		mutex_unlock(&wq_unbound_mutex);
-- 
1.8.3.1



* Re: [PATCH 2/4] workqueues: Account unbound workqueue in a seperate list
  2014-03-27 17:21 ` [PATCH 2/4] workqueues: Account unbound workqueue in a seperate list Frederic Weisbecker
@ 2014-03-30 12:57   ` Tejun Heo
  2014-04-03 14:48     ` Frederic Weisbecker
  0 siblings, 1 reply; 24+ messages in thread
From: Tejun Heo @ 2014-03-30 12:57 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Viresh Kumar

On Thu, Mar 27, 2014 at 06:21:00PM +0100, Frederic Weisbecker wrote:
> The workqueues are all listed in a global list protected by a big mutex.
> And this big mutex is used in apply_workqueue_attrs() as well.
> 
> Now as we plan to implement a directory to control the cpumask of
> all non-ABI unbound workqueues, we want to be able to iterate over all
> unbound workqueues and call apply_workqueue_attrs() for each of
> them with the new cpumask.
> 
> But a deadlock risk stands in the way: we need to iterate the list
> of workqueues under wq_pool_mutex, but apply_workqueue_attrs()
> itself takes wq_pool_mutex.

Wouldn't the right thing to do be factoring out
apply_workqueue_attrs_locked()?  It's cleaner to block out addition of
new workqueues while the masks are being updated anyway.

Thanks.

-- 
tejun


* Re: [PATCH 3/4] workqueue: Add anon workqueue sysfs hierarchy
  2014-03-27 17:21 ` [PATCH 3/4] workqueue: Add anon workqueue sysfs hierarchy Frederic Weisbecker
@ 2014-03-30 13:01   ` Tejun Heo
  2014-04-03 14:42     ` Frederic Weisbecker
  2014-04-03  7:07   ` Viresh Kumar
  1 sibling, 1 reply; 24+ messages in thread
From: Tejun Heo @ 2014-03-30 13:01 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Viresh Kumar

On Thu, Mar 27, 2014 at 06:21:01PM +0100, Frederic Weisbecker wrote:
> We call "anon workqueues" the set of unbound workqueues that don't
> carry the WQ_SYSFS flag.
> 
> They are a problem nowadays because people who work on CPU isolation
> (HPC, Real time, etc...) want to be able to migrate all the unbound
> workqueues away to a single housekeeping CPU. This control is possible
> through sysfs but only with WQ_SYSFS workqueues.
> 
> Now we need to deal with the other unbound workqueues. There are two
> possible solutions:
> 
> 1) Implement a sysfs directory for each unbound !WQ_SYSFS workqueue. This
> could be done with a specific Kconfig to make sure that these workqueues
> won't be considered a stable ABI. But we all know that all distros
> will enable this Kconfig symbol and that a warning in the Kconfig help
> text won't protect against anything.
> 
> 2) Implement a single sysfs directory containing only the cpumask file
> to control the affinity of all the !WQ_SYSFS workqueues.
> 
> This patch implements the second solution but only for non-ordered
> unbound workqueues. Ordered workqueues need a special treatment and
> will be handled in a subsequent patch.

I'm not really sure this is the right approach.  I think I wrote this
way back but wouldn't it make more sense to allow userland to restrict
the cpus which are allowed to all unbound workqueues.  As currently
implemented, setting WQ_SYSFS to give userland more control would let
that workqueue escape this global mask as a side effect, which is
a surprising behavior and doesn't make much sense to me.  I think it
would make far more sense to implement a knob which controls which
cpus are available to *all* unbound workqueues including the default
fallback one.  That way it's way more consistent and I'm pretty sure
the code would be fairly simple too.  All it needs to do is
restrict the online cpus that unbound workqueues see.
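
Roughly, I'd expect the effective mask computation to boil down to
something like this (a sketch only; "wq_unbound_cpumask" is a made-up
name for the global knob):

	/* combine a wq's own cpumask with the global unbound restriction */
	static void unbound_effective_cpumask(struct cpumask *effective,
					      const struct workqueue_attrs *attrs)
	{
		cpumask_and(effective, attrs->cpumask, wq_unbound_cpumask);
		/* fall back to the global mask if they don't intersect */
		if (cpumask_empty(effective))
			cpumask_copy(effective, wq_unbound_cpumask);
	}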

Thanks.

-- 
tejun


* Re: [PATCH 4/4] workqueue: Include ordered workqueues in anon workqueue sysfs interface
  2014-03-27 17:21 ` [PATCH 4/4] workqueue: Include ordered workqueues in anon workqueue sysfs interface Frederic Weisbecker
@ 2014-03-31 12:50   ` Lai Jiangshan
  2014-03-31 13:15     ` Lai Jiangshan
  0 siblings, 1 reply; 24+ messages in thread
From: Lai Jiangshan @ 2014-03-31 12:50 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Tejun Heo, Viresh Kumar

On 03/28/2014 01:21 AM, Frederic Weisbecker wrote:
> Changing the cpu affinity of workqueues through the sysfs interface
> is done with apply_workqueue_attrs() by replacing the old pwqs
> of a workqueue with new ones tied to worker pools that are affine to the
> new cpumask.
> 
> We can't do that with ordered workqueues however because the serialization
> of their works is enforced by keeping a single worker running. Replacing
> it with a new pool with a single worker may open a small race window where
> two workers could run concurrently during the pwq replacement. Eventually
> this can break the ordering guarantee.
> 
> So ordered workqueues get special treatment here. Since they run a
> single pwq with a single pool containing a single worker that all
> ordered workqueues share, we simply change this worker's affinity
> to the new cpumask.
> 
> Also, just in case this behaviour changes a bit in the future and some
> ordered workqueues get their own private worker, let's iterate the affinity
> change over all ordered workqueue pools. This way it works in both cases:
> whether a single worker is shared among all ordered workqueue pools or
> some of them run their private one.
> 
> Suggested-by: Tejun Heo <tj@kernel.org>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Kevin Hilman <khilman@linaro.org>
> Cc: Mike Galbraith <bitbucket@online.de>
> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> Cc: Tejun Heo <tj@kernel.org>
> Cc: Viresh Kumar <viresh.kumar@linaro.org>
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
> ---
>  kernel/workqueue.c | 68 ++++++++++++++++++++++++++++++++++++++++++++----------
>  1 file changed, 56 insertions(+), 12 deletions(-)
> 
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 2863c39..18807c7 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -3330,10 +3330,60 @@ static ssize_t wq_anon_cpumask_show(struct device *dev,
>  	return written;
>  }
>  
> -/* Must be called with wq_unbound_mutex held */
> -static int wq_anon_cpumask_set(cpumask_var_t cpumask)
> +static int wq_ordered_cpumask_set(struct workqueue_struct *wq, cpumask_var_t cpumask)
> +{
> +	struct pool_workqueue *pwq;
> +	struct worker_pool *pool;
> +	struct worker *worker;
> +	int ret = 0;
> +	int wi;
> +
> +	mutex_lock(&wq_pool_mutex);
> +	pwq = list_first_entry(&wq->pwqs, typeof(*pwq), pwqs_node);
> +	pool = pwq->pool;
> +
> +	mutex_lock(&pool->manager_mutex);
> +	for_each_pool_worker(worker, wi, pool) {
> +		/* CHECKME: Should we hold pool->lock here? */

You don't need to hold pool->lock, which is a spinlock.
pool->manager_mutex is enough to apply a cpumask setting
(but we only do it in cpu hotplug callbacks now).

> +		ret = set_cpus_allowed_ptr(worker->task, cpumask);

No.

Even the pwq (of an ordered wq) shares the pool with other unbound pwqs.
We shouldn't change the worker's attributes.

Although we could change the ordered wq to use a dedicated pool.
But if we do so, I would recommend removing the ordered wq from workqueue.c entirely
and asking the callers to use queue_kthread_work() instead.

A (dedicated or any) pool requires at least two workers, and requires much
more of other resources.

> +		if (ret)
> +			break;
> +	}
> +	if (!ret) {
> +		cpumask_copy(pool->attrs->cpumask, cpumask);

pool->attrs should be constant (except during initialization/destruction).

> +	}
> +	mutex_unlock(&pool->manager_mutex);
> +
> +	if (!ret) {
> +		mutex_lock(&wq->mutex);
> +		cpumask_copy(wq->unbound_attrs->cpumask, cpumask);
> +		mutex_unlock(&wq->mutex);
> +	}
> +
> +	mutex_unlock(&wq_pool_mutex);
> +
> +	return ret;
> +}
> +
> +static int wq_anon_cpumask_set(struct workqueue_struct *wq, cpumask_var_t cpumask)
>  {
>  	struct workqueue_attrs *attrs;
> +	int ret;
> +
> +	attrs = wq_sysfs_prep_attrs(wq);
> +	if (!attrs)
> +		return -ENOMEM;
> +
> +	cpumask_copy(attrs->cpumask, cpumask);
> +	ret = apply_workqueue_attrs(wq, attrs);
> +	free_workqueue_attrs(attrs);
> +
> +	return ret;
> +}
> +
> +/* Must be called with wq_unbound_mutex held */
> +static int wq_anon_cpumask_set_all(cpumask_var_t cpumask)
> +{
>  	struct workqueue_struct *wq;
>  	int ret;
>  
> @@ -3343,15 +3393,9 @@ static int wq_anon_cpumask_set(cpumask_var_t cpumask)
>  			continue;
>  		/* Ordered workqueues need specific treatment */
>  		if (wq->flags & __WQ_ORDERED)
> -			continue;
> -
> -		attrs = wq_sysfs_prep_attrs(wq);
> -		if (!attrs)
> -			return -ENOMEM;
> -
> -		cpumask_copy(attrs->cpumask, cpumask);
> -		ret = apply_workqueue_attrs(wq, attrs);
> -		free_workqueue_attrs(attrs);
> +			ret = wq_ordered_cpumask_set(wq, cpumask);
> +		else
> +			ret = wq_anon_cpumask_set(wq, cpumask);
>  		if (ret)
>  			break;
>  	}
> @@ -3376,7 +3420,7 @@ static ssize_t wq_anon_cpumask_store(struct device *dev,
>  	get_online_cpus();
>  	if (cpumask_intersects(cpumask, cpu_online_mask)) {
>  		mutex_lock(&wq_unbound_mutex);
> -		ret = wq_anon_cpumask_set(cpumask);
> +		ret = wq_anon_cpumask_set_all(cpumask);
>  		if (!ret)
>  			cpumask_copy(&wq_anon_cpumask, cpumask);
>  		mutex_unlock(&wq_unbound_mutex);



* Re: [PATCH 4/4] workqueue: Include ordered workqueues in anon workqueue sysfs interface
  2014-03-31 12:50   ` Lai Jiangshan
@ 2014-03-31 13:15     ` Lai Jiangshan
  2014-04-03 15:59       ` Frederic Weisbecker
  0 siblings, 1 reply; 24+ messages in thread
From: Lai Jiangshan @ 2014-03-31 13:15 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Tejun Heo, Viresh Kumar

On 03/31/2014 08:50 PM, Lai Jiangshan wrote:
> On 03/28/2014 01:21 AM, Frederic Weisbecker wrote:
>> Changing the cpu affinity of workqueues through the sysfs interface
>> is done with apply_workqueue_attrs() by replacing the old pwqs
>> of a workqueue with new ones tied to worker pools that are affine to the
>> new cpumask.
>>
>> We can't do that with ordered workqueues however because the serialization
>> of their works is enforced by keeping a single worker running. Replacing
>> it with a new pool with a single worker may open a small race window where
>> two workers could run concurrently during the pwq replacement. Eventually
>> this can break the ordering guarantee.
>>
>> So ordered workqueues get special treatment here. Since they run a
>> single pwq with a single pool containing a single worker that all
>> ordered workqueues share, we simply change this worker's affinity
>> to the new cpumask.
>>
>> Also, just in case this behaviour changes a bit in the future and some
>> ordered workqueues get their own private worker, let's iterate the affinity
>> change over all ordered workqueue pools. This way it works in both cases:
>> whether a single worker is shared among all ordered workqueue pools or
>> some of them run their private one.
>>
>> Suggested-by: Tejun Heo <tj@kernel.org>
>> Cc: Christoph Lameter <cl@linux.com>
>> Cc: Kevin Hilman <khilman@linaro.org>
>> Cc: Mike Galbraith <bitbucket@online.de>
>> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
>> Cc: Tejun Heo <tj@kernel.org>
>> Cc: Viresh Kumar <viresh.kumar@linaro.org>
>> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
>> ---
>>  kernel/workqueue.c | 68 ++++++++++++++++++++++++++++++++++++++++++++----------
>>  1 file changed, 56 insertions(+), 12 deletions(-)
>>
>> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
>> index 2863c39..18807c7 100644
>> --- a/kernel/workqueue.c
>> +++ b/kernel/workqueue.c
>> @@ -3330,10 +3330,60 @@ static ssize_t wq_anon_cpumask_show(struct device *dev,
>>  	return written;
>>  }
>>  
>> -/* Must be called with wq_unbound_mutex held */
>> -static int wq_anon_cpumask_set(cpumask_var_t cpumask)
>> +static int wq_ordered_cpumask_set(struct workqueue_struct *wq, cpumask_var_t cpumask)
>> +{
>> +	struct pool_workqueue *pwq;
>> +	struct worker_pool *pool;
>> +	struct worker *worker;
>> +	int ret = 0;
>> +	int wi;
>> +
>> +	mutex_lock(&wq_pool_mutex);
>> +	pwq = list_first_entry(&wq->pwqs, typeof(*pwq), pwqs_node);
>> +	pool = pwq->pool;
>> +
>> +	mutex_lock(&pool->manager_mutex);
>> +	for_each_pool_worker(worker, wi, pool) {
>> +		/* CHECKME: Should we hold pool->lock here? */
> 
> You don't need to hold pool->lock, which is a spinlock.
> pool->manager_mutex is enough to apply a cpumask setting
> (but we only do it in cpu hotplug callbacks now).
> 
>> +		ret = set_cpus_allowed_ptr(worker->task, cpumask);
> 
> No.
> 
> Even the pwq (of an ordered wq) shares the pool with other unbound pwqs.
> We shouldn't change the worker's attributes.


Sorry, I'm wrong.
Tejun had said there is only one default worker pool for ordered workqueues.

That is true. But this pool may be shared with other non-ordered workqueues which
may have WQ_SYSFS. We should split this pool into two pools:
one for ordered wqs, one for the rest.

> 
> Although we could change the ordered wq to use a dedicated pool.
> But if we do so, I would recommend removing the ordered wq from workqueue.c entirely
> and asking the callers to use queue_kthread_work() instead.
> 
> A (dedicated or any) pool requires at least two workers, and requires much
> more of other resources.
> 
>> +		if (ret)
>> +			break;
>> +	}
>> +	if (!ret) {
>> +		cpumask_copy(pool->attrs->cpumask, cpumask);
> 
> pool->attrs should be constant (except during initialization/destruction).
> 
>> +	}
>> +	mutex_unlock(&pool->manager_mutex);
>> +
>> +	if (!ret) {
>> +		mutex_lock(&wq->mutex);
>> +		cpumask_copy(wq->unbound_attrs->cpumask, cpumask);
>> +		mutex_unlock(&wq->mutex);
>> +	}
>> +
>> +	mutex_unlock(&wq_pool_mutex);
>> +
>> +	return ret;
>> +}
>> +
>> +static int wq_anon_cpumask_set(struct workqueue_struct *wq, cpumask_var_t cpumask)
>>  {
>>  	struct workqueue_attrs *attrs;
>> +	int ret;
>> +
>> +	attrs = wq_sysfs_prep_attrs(wq);
>> +	if (!attrs)
>> +		return -ENOMEM;
>> +
>> +	cpumask_copy(attrs->cpumask, cpumask);
>> +	ret = apply_workqueue_attrs(wq, attrs);
>> +	free_workqueue_attrs(attrs);
>> +
>> +	return ret;
>> +}
>> +
>> +/* Must be called with wq_unbound_mutex held */
>> +static int wq_anon_cpumask_set_all(cpumask_var_t cpumask)
>> +{
>>  	struct workqueue_struct *wq;
>>  	int ret = 0;
>>  
>> @@ -3343,15 +3393,9 @@ static int wq_anon_cpumask_set(cpumask_var_t cpumask)
>>  			continue;
>>  		/* Ordered workqueues need specific treatment */
>>  		if (wq->flags & __WQ_ORDERED)
>> -			continue;
>> -
>> -		attrs = wq_sysfs_prep_attrs(wq);
>> -		if (!attrs)
>> -			return -ENOMEM;
>> -
>> -		cpumask_copy(attrs->cpumask, cpumask);
>> -		ret = apply_workqueue_attrs(wq, attrs);
>> -		free_workqueue_attrs(attrs);
>> +			ret = wq_ordered_cpumask_set(wq, cpumask);



After the pool is split, wq_ordered_cpumask_set() should only be called once;
once is enough for all ordered wqs.
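
Something like this, for example (just a sketch on top of this patch; the
"ordered_done" flag is invented):

	bool ordered_done = false;

	list_for_each_entry(wq, &workqueues_unbound, unbound_list) {
		if (wq->flags & WQ_SYSFS)
			continue;
		if (wq->flags & __WQ_ORDERED) {
			/* one call covers the single shared ordered pool */
			if (!ordered_done) {
				ret = wq_ordered_cpumask_set(wq, cpumask);
				ordered_done = true;
			}
		} else {
			ret = wq_anon_cpumask_set(wq, cpumask);
		}
		if (ret)
			break;
	}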


>> +		else
>> +			ret = wq_anon_cpumask_set(wq, cpumask);
>>  		if (ret)
>>  			break;
>>  	}
>> @@ -3376,7 +3420,7 @@ static ssize_t wq_anon_cpumask_store(struct device *dev,
>>  	get_online_cpus();
>>  	if (cpumask_intersects(cpumask, cpu_online_mask)) {
>>  		mutex_lock(&wq_unbound_mutex);
>> -		ret = wq_anon_cpumask_set(cpumask);
>> +		ret = wq_anon_cpumask_set_all(cpumask);
>>  		if (!ret)
>>  			cpumask_copy(&wq_anon_cpumask, cpumask);
>>  		mutex_unlock(&wq_unbound_mutex);
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
> 



* Re: [PATCH 3/4] workqueue: Add anon workqueue sysfs hierarchy
  2014-03-27 17:21 ` [PATCH 3/4] workqueue: Add anon workqueue sysfs hierarchy Frederic Weisbecker
  2014-03-30 13:01   ` Tejun Heo
@ 2014-04-03  7:07   ` Viresh Kumar
  1 sibling, 0 replies; 24+ messages in thread
From: Viresh Kumar @ 2014-04-03  7:07 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Tejun Heo

On 27 March 2014 22:51, Frederic Weisbecker <fweisbec@gmail.com> wrote:
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c

>  static int __init wq_sysfs_init(void)
>  {
> -       return subsys_virtual_register(&wq_subsys, NULL);
> +       struct device *anon_dev;
> +       int ret;
> +
> +       ret = subsys_virtual_register(&wq_subsys, NULL);
> +       if (ret < 0)
> +               return ret;
> +
> +       mutex_lock(&wq_unbound_mutex);
> +       cpumask_copy(&wq_anon_cpumask, cpu_possible_mask);
> +       mutex_unlock(&wq_unbound_mutex);
> +
> +       anon_dev = kzalloc(sizeof(*anon_dev), GFP_KERNEL);
> +       if (!anon_dev)
> +               return -ENOMEM;
> +
> +       anon_dev->bus = &wq_subsys;
> +       anon_dev->init_name = "anon_wqs";
> +       anon_dev->release = device_release;
> +
> +       ret = device_register(anon_dev);
> +       if (ret) {
> +               kfree(anon_dev);
> +               return ret;
> +       }
> +
> +       ret = device_create_file(anon_dev, &wq_sysfs_anon_attr);
> +       if (ret) {
> +               device_unregister(anon_dev);

kfree(anon_dev) ??

> +               return ret;
> +       }
> +
> +       kobject_uevent(&anon_dev->kobj, KOBJ_ADD);
> +
> +       return 0;
>  }
>  core_initcall(wq_sysfs_init);


* Re: [PATCH 1/4] workqueue: Move workqueue bus attr to device attribute
  2014-03-27 17:20 ` [PATCH 1/4] workqueue: Move workqueue bus attr to device attribute Frederic Weisbecker
@ 2014-04-03  7:09   ` Viresh Kumar
  2014-04-24 13:48     ` Frederic Weisbecker
  0 siblings, 1 reply; 24+ messages in thread
From: Viresh Kumar @ 2014-04-03  7:09 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Tejun Heo

Nothing much, just some nitpicks :)

On 27 March 2014 22:50, Frederic Weisbecker <fweisbec@gmail.com> wrote:
> A workqueue directory implements at least two files: max_active and
> per_cpu. Since thse are constant over WQ_SYSFS workqueues, they are

s/thse/these

> implemented as bus attributes.

> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 193e977..4d230e3 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -3133,7 +3133,6 @@ static ssize_t per_cpu_show(struct device *dev, struct device_attribute *attr,
>
>         return scnprintf(buf, PAGE_SIZE, "%d\n", (bool)!(wq->flags & WQ_UNBOUND));
>  }
> -static DEVICE_ATTR_RO(per_cpu);
>
>  static ssize_t max_active_show(struct device *dev,
>                                struct device_attribute *attr, char *buf)
> @@ -3156,14 +3155,12 @@ static ssize_t max_active_store(struct device *dev,
>         workqueue_set_max_active(wq, val);
>         return count;
>  }
> -static DEVICE_ATTR_RW(max_active);
>
> -static struct attribute *wq_sysfs_attrs[] = {
> -       &dev_attr_per_cpu.attr,
> -       &dev_attr_max_active.attr,
> -       NULL,
> +static struct device_attribute wq_sysfs_default_attrs[] = {
> +       __ATTR(per_cpu, 0444, per_cpu_show, NULL),
> +       __ATTR(max_active, 0644, max_active_show, max_active_store),
> +       __ATTR_NULL,
>  };
> -ATTRIBUTE_GROUPS(wq_sysfs);
>
>  static ssize_t wq_pool_ids_show(struct device *dev,
>                                 struct device_attribute *attr, char *buf)
> @@ -3313,7 +3310,6 @@ static struct device_attribute wq_sysfs_unbound_attrs[] = {
>
>  static struct bus_type wq_subsys = {
>         .name                           = "workqueue",
> -       .dev_groups                     = wq_sysfs_groups,
>  };
>
>  static int __init wq_sysfs_init(void)
> @@ -3346,6 +3342,7 @@ static void wq_device_release(struct device *dev)
>   */
>  int workqueue_sysfs_register(struct workqueue_struct *wq)
>  {
> +       struct device_attribute *attr;
>         struct wq_device *wq_dev;
>         int ret;
>
> @@ -3379,8 +3376,16 @@ int workqueue_sysfs_register(struct workqueue_struct *wq)
>                 return ret;
>         }
>
> +       for (attr = wq_sysfs_default_attrs; attr->attr.name; attr++) {
> +               ret = device_create_file(&wq_dev->dev, attr);
> +               if (ret) {
> +                       device_unregister(&wq_dev->dev);
> +                       wq->wq_dev = NULL;
> +                       return ret;
> +               }
> +       }

Exactly the same as the loop below; probably create a routine for this?
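
Maybe something like this (just a sketch; the helper name is invented and
it relies on wq_device's back-pointer to its workqueue):

	static int wq_create_dev_files(struct wq_device *wq_dev,
				       struct device_attribute *attrs)
	{
		struct device_attribute *attr;
		int ret;

		for (attr = attrs; attr->attr.name; attr++) {
			ret = device_create_file(&wq_dev->dev, attr);
			if (ret) {
				device_unregister(&wq_dev->dev);
				wq_dev->wq->wq_dev = NULL;
				return ret;
			}
		}

		return 0;
	}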

>         if (wq->flags & WQ_UNBOUND) {
> -               struct device_attribute *attr;
>

probably remove this blank line as well..

>                 for (attr = wq_sysfs_unbound_attrs; attr->attr.name; attr++) {
>                         ret = device_create_file(&wq_dev->dev, attr);
> --
> 1.8.3.1
>


* Re: [PATCH 3/4] workqueue: Add anon workqueue sysfs hierarchy
  2014-03-30 13:01   ` Tejun Heo
@ 2014-04-03 14:42     ` Frederic Weisbecker
  2014-04-03 14:58       ` Tejun Heo
  0 siblings, 1 reply; 24+ messages in thread
From: Frederic Weisbecker @ 2014-04-03 14:42 UTC (permalink / raw)
  To: Tejun Heo
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Viresh Kumar

On Sun, Mar 30, 2014 at 09:01:39AM -0400, Tejun Heo wrote:
> On Thu, Mar 27, 2014 at 06:21:01PM +0100, Frederic Weisbecker wrote:
> > We call "anon workqueues" the set of unbound workqueues that don't
> > carry the WQ_SYSFS flag.
> > 
> > They are a problem nowadays because people who work on CPU isolation
> > (HPC, Real time, etc...) want to be able to migrate all the unbound
> > workqueues away to a single housekeeping CPU. This control is possible
> > through sysfs but only with WQ_SYSFS workqueues.
> > 
> > Now we need to deal with the other unbound workqueues. There are two
> > possible solutions:
> > 
> > 1) Implement a sysfs directory for each unbound !WQ_SYSFS workqueue. This
> > could be done with a specific Kconfig to make sure that these workqueues
> > won't be considered a stable ABI. But we all know that all distros
> > will enable this Kconfig symbol and that a warning in the Kconfig help
> > text won't protect against anything.
> > 
> > 2) Implement a single sysfs directory containing only the cpumask file
> > to control the affinity of all the !WQ_SYSFS workqueues.
> > 
> > This patch implements the second solution but only for non-ordered
> > unbound workqueues. Ordered workqueues need a special treatment and
> > will be handled in a subsequent patch.
> 
> I'm not really sure this is the good approach.  I think I wrote this
> way back but wouldn't it make more sense to allow userland to restrict
> the cpus which are allowed to all unbound cpus.  As currently
> implemented, setting WQ_SYSFS to give userland more control would
> escape that workqueue from this global mask as a side effect, which is
> a surprising behavior and doesn't make much sense to me.

I just considered that anon workqueues shouldn't be that different from
another WQ_SYSFS workqueue. This way we don't have surprising side effects.
Touching a WQ_SYSFS doesn't impact anon workqueues, and touching anon workqueues
doesn't impact WQ_SYSFS workqueues.

In fact this is simply the current way we do it, just extended.

But anyway your solution looks more simple.

> I think it would make far more sense to implement a knob which controls which
> cpus are available to *all* unbound workqueue including the default
> fallback one.  That way it's way more consistent and I'm pretty sure
> the code would be fairly simple too.  All it needs to do is
> restricting the online cpus that unbound workqueues see.

Yeah I like this. So the right place for this cpumask would be in the root of
/sys/devices/virtual/workqueue/ , right?

Thanks.


* Re: [PATCH 2/4] workqueues: Account unbound workqueue in a seperate list
  2014-03-30 12:57   ` Tejun Heo
@ 2014-04-03 14:48     ` Frederic Weisbecker
  2014-04-03 15:01       ` Tejun Heo
  0 siblings, 1 reply; 24+ messages in thread
From: Frederic Weisbecker @ 2014-04-03 14:48 UTC (permalink / raw)
  To: Tejun Heo
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Viresh Kumar

On Sun, Mar 30, 2014 at 08:57:51AM -0400, Tejun Heo wrote:
> On Thu, Mar 27, 2014 at 06:21:00PM +0100, Frederic Weisbecker wrote:
> > The workqueues are all listed in a global list protected by a big mutex.
> > And this big mutex is used in apply_workqueue_attrs() as well.
> > 
> > Now as we plan to implement a directory to control the cpumask of
> > all non-ABI unbound workqueues, we want to be able to iterate over all
> > unbound workqueues and call apply_workqueue_attrs() for each of
> > them with the new cpumask.
> > 
> > But a deadlock risk stands in the way: we need to iterate the list
> > of workqueues under wq_pool_mutex, but apply_workqueue_attrs()
> > itself takes wq_pool_mutex.
> 
> Wouldn't the right thing to do be factoring out
> apply_workqueue_attrs_locked()?  It's cleaner to block out addition of
> new workqueues while the masks are being updated anyway.

I'm not quite sure I get what you suggest. Do you mean have apply_workqueue_attrs_locked()
calling apply_workqueue_attrs() under the lock on this patch?

Thanks.


* Re: [PATCH 3/4] workqueue: Add anon workqueue sysfs hierarchy
  2014-04-03 14:42     ` Frederic Weisbecker
@ 2014-04-03 14:58       ` Tejun Heo
  2014-04-03 15:05         ` Frederic Weisbecker
  0 siblings, 1 reply; 24+ messages in thread
From: Tejun Heo @ 2014-04-03 14:58 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Viresh Kumar

Hello, Frederic.

On Thu, Apr 03, 2014 at 04:42:55PM +0200, Frederic Weisbecker wrote:
> > I'm not really sure this is the right approach.  I think I wrote this
> > way back but wouldn't it make more sense to allow userland to restrict
> > the cpus which are allowed to all unbound workqueues.  As currently
> > implemented, setting WQ_SYSFS to give userland more control would let
> > that workqueue escape this global mask as a side effect, which is
> > a surprising behavior and doesn't make much sense to me.
> 
> I just considered that anon workqueues shouldn't be that different from
> > another WQ_SYSFS workqueue. This way we don't have surprising side effects.
> Touching a WQ_SYSFS doesn't impact anon workqueues, and touching anon workqueues
> doesn't impact WQ_SYSFS workqueues.

I really think it'd be a lot better to perceive the default attributes
to be layered below explicit WQ_SYSFS attributes; otherwise, we have
two disjoint sets and the workqueues would jump between the two
depending on WQ_SYSFS.  A system tool which wants to configure all
workqueues would have to scan and manipulate all of them not knowing
what specific one's requirements are and tools which want to configure
specific ones likely won't know what the overruling condition is and
violate the global constraints.  It'd be clearer to have the layering
pre-defined and enforced.

> In fact this is simply the current way we do it, just extended.

Yes, in term of mechanics, it is but I don't think that's what we
want.  We want to be able to say "unbound workqueues in general are
confined to these cpus" and it's weird to just provide knobs for wqs
which don't have knobs.

> Yeah I like this. So the right place for this cpumask would be in
> the root of /sys/devices/virtual/workqueue/ , right?

Yes, I think that'd make more sense.

Thanks.

-- 
tejun


* Re: [PATCH 2/4] workqueues: Account unbound workqueue in a seperate list
  2014-04-03 14:48     ` Frederic Weisbecker
@ 2014-04-03 15:01       ` Tejun Heo
  2014-04-03 15:43         ` Frederic Weisbecker
  0 siblings, 1 reply; 24+ messages in thread
From: Tejun Heo @ 2014-04-03 15:01 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Viresh Kumar

Hello,

On Thu, Apr 03, 2014 at 04:48:28PM +0200, Frederic Weisbecker wrote:
> > Wouldn't the right thing to do be factoring out
> > apply_workqueue_attrs_locked()?  It's cleaner to block out addition of
> > new workqueues while the masks are being updated anyway.
> 
> I'm not quite sure I get what you suggest. Do you mean have
> apply_workqueue_attrs_locked() calling apply_workqueue_attrs() under
> the lock on this patch?

Not sure it still matters but I was suggesting that creating
apply_workqueue_attrs_locked() which requires the caller to handle
locking and making apply_workqueue_attrs() a wrapper which grabs and
releases the lock around it, and using the former in locked iteration
would work.  lol has this explanation made it any clearer or is it
even worse now?  :)
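
IOW, something in this direction (a rough sketch, untested):

	/* caller must hold wq_pool_mutex */
	static int apply_workqueue_attrs_locked(struct workqueue_struct *wq,
						const struct workqueue_attrs *attrs)
	{
		lockdep_assert_held(&wq_pool_mutex);
		/* ... current apply_workqueue_attrs() body, minus the locking ... */
		return 0;
	}

	int apply_workqueue_attrs(struct workqueue_struct *wq,
				  const struct workqueue_attrs *attrs)
	{
		int ret;

		mutex_lock(&wq_pool_mutex);
		ret = apply_workqueue_attrs_locked(wq, attrs);
		mutex_unlock(&wq_pool_mutex);
		return ret;
	}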

Thanks.

-- 
tejun


* Re: [PATCH 3/4] workqueue: Add anon workqueue sysfs hierarchy
  2014-04-03 14:58       ` Tejun Heo
@ 2014-04-03 15:05         ` Frederic Weisbecker
  0 siblings, 0 replies; 24+ messages in thread
From: Frederic Weisbecker @ 2014-04-03 15:05 UTC (permalink / raw)
  To: Tejun Heo
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Viresh Kumar

On Thu, Apr 03, 2014 at 10:58:05AM -0400, Tejun Heo wrote:
> Hello, Frederic.
> 
> On Thu, Apr 03, 2014 at 04:42:55PM +0200, Frederic Weisbecker wrote:
> > > I'm not really sure this is the right approach.  I think I wrote this
> > > way back but wouldn't it make more sense to allow userland to restrict
> > > the cpus which are allowed to all unbound workqueues.  As currently
> > > implemented, setting WQ_SYSFS to give userland more control would let
> > > that workqueue escape this global mask as a side effect, which is
> > > a surprising behavior and doesn't make much sense to me.
> > 
> > I just considered that anon workqueues shouldn't be that different from
> > > another WQ_SYSFS workqueue. This way we don't have surprising side effects.
> > Touching a WQ_SYSFS doesn't impact anon workqueues, and touching anon workqueues
> > doesn't impact WQ_SYSFS workqueues.
> 
> I really think it'd be a lot better to perceive the default attributes
> to be layered below explicit WQ_SYSFS attributes; otherwise, we have
> two disjoint sets and the workqueues would jump between the two
> depending on WQ_SYSFS.  A system tool which wants to configure all
> workqueues would have to scan and manipulate all of them not knowing
> what specific one's requirements are and tools which want to configure
> specific ones likely won't know what the overruling condition is and
> violate the global constraints.  It'd be clearer to have the layering
> pre-defined and enforced.

Ok, works for me.

> 
> > In fact this is simply the current way we do it, just extended.
> 
> Yes, in term of mechanics, it is but I don't think that's what we
> want.  We want to be able to say "unbound workqueues in general are
> confined to these cpus" and it's weird to just provide knobs for wqs
> which don't have knobs.

Yeah I like the simplicity of that.

> 
> > Yeah I like this. So the right place for this cpumask would be in
> > the root of /sys/devices/virtual/workqueue/ , right?
> 
> Yes, I think that'd make more sense.

Ok, I'll try this. Thanks!

> 
> Thanks.
> 
> -- 
> tejun


* Re: [PATCH 2/4] workqueues: Account unbound workqueue in a seperate list
  2014-04-03 15:01       ` Tejun Heo
@ 2014-04-03 15:43         ` Frederic Weisbecker
  0 siblings, 0 replies; 24+ messages in thread
From: Frederic Weisbecker @ 2014-04-03 15:43 UTC (permalink / raw)
  To: Tejun Heo
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Viresh Kumar

On Thu, Apr 03, 2014 at 11:01:28AM -0400, Tejun Heo wrote:
> Hello,
> 
> On Thu, Apr 03, 2014 at 04:48:28PM +0200, Frederic Weisbecker wrote:
> > > Wouldn't the right thing to do would be factoring out
> > > apply_workqueue_attrs_locked()?  It's cleaner to block out addition of
> > > new workqueues while the masks are being updated anyway.
> > 
> > I'm not quite sure I get what you suggest. Do you mean have
> > apply_workqueue_attrs_locked() calling apply_workqueue_attrs() under
> > the lock on this patch?
> 
> Not sure it still matters but I was suggesting that creating
> apply_workqueue_attrs_locked() which requires the caller to handle
> locking and making apply_workqueue_attrs() a wrapper which grabs and
> releases the lock around it, and using the former in locked iteration
> would work.  lol has this explanation made it any clearer or is it
> even worse now?  :)

I see, it gets a little better now :)

Maybe it still matters because I still need to iterate over unbound
workqueues to apply an update on "cpu_unbound_wqs_mask". And the list must remain
stable while I call apply_workqueue_attrs() on workqueues.
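
Something in that spirit then (a quick sketch, assuming the locked variant
exists and the attrs are built from "cpu_unbound_wqs_mask"):

	mutex_lock(&wq_pool_mutex);
	list_for_each_entry(wq, &workqueues, list) {
		if (!(wq->flags & WQ_UNBOUND))
			continue;
		ret = apply_workqueue_attrs_locked(wq, attrs);
		if (ret)
			break;
	}
	mutex_unlock(&wq_pool_mutex);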

Anyway, we'll see how it looks :)


* Re: [PATCH 4/4] workqueue: Include ordered workqueues in anon workqueue sysfs interface
  2014-03-31 13:15     ` Lai Jiangshan
@ 2014-04-03 15:59       ` Frederic Weisbecker
  0 siblings, 0 replies; 24+ messages in thread
From: Frederic Weisbecker @ 2014-04-03 15:59 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Tejun Heo, Viresh Kumar

On Mon, Mar 31, 2014 at 09:15:26PM +0800, Lai Jiangshan wrote:
> On 03/31/2014 08:50 PM, Lai Jiangshan wrote:
> 
> Sorry, I'm wrong.
> Tejun had said there is only one default worker pool for ordered workqueues.
> 
> That is true. But this pool may be shared with other non-ordered workqueues which
> may have WQ_SYSFS. We should split this pool into two pools:
> one for ordered wqs, one for the rest.

Ah good point, there is a side effect here. Ok, the idea of a low-level unbound cpumask
should solve that issue, as we want every unbound workqueue to be covered.

[...]
> >> +/* Must be called with wq_unbound_mutex held */
> >> +static int wq_anon_cpumask_set_all(cpumask_var_t cpumask)
> >> +{
> >>  	struct workqueue_struct *wq;
> >>  	int ret;
> >>  
> >> @@ -3343,15 +3393,9 @@ static int wq_anon_cpumask_set(cpumask_var_t cpumask)
> >>  			continue;
> >>  		/* Ordered workqueues need specific treatment */
> >>  		if (wq->flags & __WQ_ORDERED)
> >> -			continue;
> >> -
> >> -		attrs = wq_sysfs_prep_attrs(wq);
> >> -		if (!attrs)
> >> -			return -ENOMEM;
> >> -
> >> -		cpumask_copy(attrs->cpumask, cpumask);
> >> -		ret = apply_workqueue_attrs(wq, attrs);
> >> -		free_workqueue_attrs(attrs);
> >> +			ret = wq_ordered_cpumask_set(wq, cpumask);
> 
> 
> 
> After the pool is split. wq_ordered_cpumask_set() should only be called once.
> once is enough for all ordered wqs.

True. I was just worried that the implementation might change in the future and some
ordered unbound workqueues might get their own private pool, in case we have problems
with works of a workqueue blocking works of another workqueue while they wait on
heavy resources.

The workqueue subsystem usually deals with that by launching new workers on demand
to handle waiting works in the queue. But that doesn't seem to apply to ordered unbound
workqueues because they have max_active=1, so they can't spawn another worker if there
is a work blocking the others. So I wouldn't be surprised if such an issue is resolved
by using private pools on some ordered workqueues.

IIUC.


* [PATCH] workqueue: allow changing attributions of ordered workqueue
  2014-03-27 17:20 [PATCH 0/4] workqueue: Control cpu affinity of all unbound workqueues v2 Frederic Weisbecker
                   ` (3 preceding siblings ...)
  2014-03-27 17:21 ` [PATCH 4/4] workqueue: Include ordered workqueues in anon workqueue sysfs interface Frederic Weisbecker
@ 2014-04-15  9:58 ` Lai Jiangshan
  2014-04-15 12:25   ` Frederic Weisbecker
  2014-04-23  0:04   ` Frederic Weisbecker
  4 siblings, 2 replies; 24+ messages in thread
From: Lai Jiangshan @ 2014-04-15  9:58 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Tejun Heo, Viresh Kumar, Lai Jiangshan

From 534f1df8a5a03427b0fc382150fbd34e05648a28 Mon Sep 17 00:00:00 2001
From: Lai Jiangshan <laijs@cn.fujitsu.com>
Date: Tue, 15 Apr 2014 17:52:19 +0800
Subject: [PATCH] workqueue: allow changing attributions of ordered workqueue

Allow changing ordered workqueue's cpumask
Allow changing ordered workqueue's nice value
Allow registering ordered workqueue to SYSFS

Still disallow changing an ordered workqueue's max_active, which would break the ordering guarantee
Still disallow changing an ordered workqueue's no_numa, which would break the ordering guarantee

Changing attributes will introduce new pwqs in a given workqueue, and the
old implementation doesn't have any solution to handle multiple pwqs on ordered
workqueues, so changing attributes for ordered workqueues was disallowed.

After investigating several solutions for handling multiple pwqs on ordered workqueues,
I found that the solution which reuses max_active is the simplest.
In this solution, a newly introduced pwq's max_active is kept at *ZERO* until it becomes
the oldest pwq. Thus only one (the oldest) pwq is active, and ordering is guaranteed.

This solution forces ordering at a higher level while non-reentrancy is also
forced, so we don't need to queue any work to its previous pool. And we shouldn't
do it: we must disallow any work that repeatedly requeues itself back-to-back,
since that would keep the oldest pwq alive and starve the newest pwq (never active).

Build tested only.
This patch depends on a previous patch:
workqueue: add __WQ_FREEZING and remove POOL_FREEZING

Frederic:
	You can pick this patch into your updated patchset before
	TJ applies it.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 kernel/workqueue.c |   66 ++++++++++++++++++++++++++++++---------------------
 1 files changed, 39 insertions(+), 27 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 92c9ada..fadcc4a 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1350,8 +1350,14 @@ retry:
 	 * If @work was previously on a different pool, it might still be
 	 * running there, in which case the work needs to be queued on that
 	 * pool to guarantee non-reentrancy.
+	 *
+	 * pwqs are guaranteed to become active in order for an ordered
+	 * workqueue, which also guarantees non-reentrancy for works. So a
+	 * work doesn't need to be queued on its previous pool, and it
+	 * shouldn't be, since we need the previous pwq to be releasable to
+	 * avoid work starvation on the newest pool.
 	 */
-	last_pool = get_work_pool(work);
+	last_pool = wq->flags & __WQ_ORDERED ? NULL : get_work_pool(work);
 	if (last_pool && last_pool != pwq->pool) {
 		struct worker *worker;
 
@@ -3179,6 +3185,10 @@ static ssize_t wq_numa_store(struct device *dev, struct device_attribute *attr,
 	struct workqueue_attrs *attrs;
 	int v, ret;
 
+	/* Creating per-node pwqs breaks ordering guarantee. Keep no_numa = 1 */
+	if (WARN_ON(wq->flags & __WQ_ORDERED))
+		return -EINVAL;
+
 	attrs = wq_sysfs_prep_attrs(wq);
 	if (!attrs)
 		return -ENOMEM;
@@ -3239,14 +3249,6 @@ int workqueue_sysfs_register(struct workqueue_struct *wq)
 	struct wq_device *wq_dev;
 	int ret;
 
-	/*
-	 * Adjusting max_active or creating new pwqs by applyting
-	 * attributes breaks ordering guarantee.  Disallow exposing ordered
-	 * workqueues.
-	 */
-	if (WARN_ON(wq->flags & __WQ_ORDERED))
-		return -EINVAL;
-
 	wq->wq_dev = wq_dev = kzalloc(sizeof(*wq_dev), GFP_KERNEL);
 	if (!wq_dev)
 		return -ENOMEM;
@@ -3568,6 +3570,13 @@ static void rcu_free_pwq(struct rcu_head *rcu)
 			container_of(rcu, struct pool_workqueue, rcu));
 }
 
+static struct pool_workqueue *oldest_pwq(struct workqueue_struct *wq)
+{
+	return list_last_entry(&wq->pwqs, struct pool_workqueue, pwqs_node);
+}
+
+static void pwq_adjust_max_active(struct pool_workqueue *pwq);
+
 /*
  * Scheduled on system_wq by put_pwq() when an unbound pwq hits zero refcnt
  * and needs to be destroyed.
@@ -3583,14 +3592,12 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
 	if (WARN_ON_ONCE(!(wq->flags & WQ_UNBOUND)))
 		return;
 
-	/*
-	 * Unlink @pwq.  Synchronization against wq->mutex isn't strictly
-	 * necessary on release but do it anyway.  It's easier to verify
-	 * and consistent with the linking path.
-	 */
 	mutex_lock(&wq->mutex);
 	list_del_rcu(&pwq->pwqs_node);
 	is_last = list_empty(&wq->pwqs);
+	/* try to activate the oldest pwq when needed */
+	if (!is_last && (wq->flags & __WQ_ORDERED))
+		pwq_adjust_max_active(oldest_pwq(wq));
 	mutex_unlock(&wq->mutex);
 
 	mutex_lock(&wq_pool_mutex);
@@ -3609,6 +3616,16 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
 	}
 }
 
+static bool pwq_active(struct pool_workqueue *pwq)
+{
+	/* Only the oldest pwq is active in the ordered wq */
+	if (pwq->wq->flags & __WQ_ORDERED)
+		return pwq == oldest_pwq(pwq->wq);
+
+	/* All pwqs in the non-ordered wq are active */
+	return true;
+}
+
 /**
  * pwq_adjust_max_active - update a pwq's max_active to the current setting
  * @pwq: target pool_workqueue
@@ -3626,7 +3643,7 @@ static void pwq_adjust_max_active(struct pool_workqueue *pwq)
 
 	spin_lock_irq(&pwq->pool->lock);
 
-	if (!(wq->flags & __WQ_FREEZING)) {
+	if (!(wq->flags & __WQ_FREEZING) && pwq_active(pwq)) {
 		pwq->max_active = wq->saved_max_active;
 
 		while (!list_empty(&pwq->delayed_works) &&
@@ -3680,11 +3697,11 @@ static void link_pwq(struct pool_workqueue *pwq)
 	 */
 	pwq->work_color = wq->work_color;
 
+	/* link in @pwq at the head of &wq->pwqs */
+	list_add_rcu(&pwq->pwqs_node, &wq->pwqs);
+
 	/* sync max_active to the current setting */
 	pwq_adjust_max_active(pwq);
-
-	/* link in @pwq */
-	list_add_rcu(&pwq->pwqs_node, &wq->pwqs);
 }
 
 /* obtain a pool matching @attr and create a pwq associating the pool and @wq */
@@ -3810,8 +3827,8 @@ int apply_workqueue_attrs(struct workqueue_struct *wq,
 	if (WARN_ON(!(wq->flags & WQ_UNBOUND)))
 		return -EINVAL;
 
-	/* creating multiple pwqs breaks ordering guarantee */
-	if (WARN_ON((wq->flags & __WQ_ORDERED) && !list_empty(&wq->pwqs)))
+	/* creating multiple per-node pwqs breaks ordering guarantee */
+	if (WARN_ON((wq->flags & __WQ_ORDERED) && !attrs->no_numa))
 		return -EINVAL;
 
 	pwq_tbl = kzalloc(wq_numa_tbl_len * sizeof(pwq_tbl[0]), GFP_KERNEL);
@@ -4004,7 +4021,7 @@ out_unlock:
 static int alloc_and_link_pwqs(struct workqueue_struct *wq)
 {
 	bool highpri = wq->flags & WQ_HIGHPRI;
-	int cpu, ret;
+	int cpu;
 
 	if (!(wq->flags & WQ_UNBOUND)) {
 		wq->cpu_pwqs = alloc_percpu(struct pool_workqueue);
@@ -4025,12 +4042,7 @@ static int alloc_and_link_pwqs(struct workqueue_struct *wq)
 		}
 		return 0;
 	} else if (wq->flags & __WQ_ORDERED) {
-		ret = apply_workqueue_attrs(wq, ordered_wq_attrs[highpri]);
-		/* there should only be single pwq for ordering guarantee */
-		WARN(!ret && (wq->pwqs.next != &wq->dfl_pwq->pwqs_node ||
-			      wq->pwqs.prev != &wq->dfl_pwq->pwqs_node),
-		     "ordering guarantee broken for workqueue %s\n", wq->name);
-		return ret;
+		return apply_workqueue_attrs(wq, ordered_wq_attrs[highpri]);
 	} else {
 		return apply_workqueue_attrs(wq, unbound_std_wq_attrs[highpri]);
 	}
-- 
1.7.4.4


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH] workqueue: allow changing attributions of ordered workqueue
  2014-04-15  9:58 ` [PATCH] workqueue: allow changing attributions of ordered workqueue Lai Jiangshan
@ 2014-04-15 12:25   ` Frederic Weisbecker
  2014-04-15 15:19     ` Lai Jiangshan
  2014-04-23  0:04   ` Frederic Weisbecker
  1 sibling, 1 reply; 24+ messages in thread
From: Frederic Weisbecker @ 2014-04-15 12:25 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Tejun Heo, Viresh Kumar

On Tue, Apr 15, 2014 at 05:58:08PM +0800, Lai Jiangshan wrote:
> From 534f1df8a5a03427b0fc382150fbd34e05648a28 Mon Sep 17 00:00:00 2001
> From: Lai Jiangshan <laijs@cn.fujitsu.com>
> Date: Tue, 15 Apr 2014 17:52:19 +0800
> Subject: [PATCH] workqueue: allow changing attributions of ordered workqueue
> 
> Allow changing ordered workqueue's cpumask
> Allow changing ordered workqueue's nice value
> Allow registering ordered workqueue to SYSFS
> 
> Still disallow changing an ordered workqueue's max_active, which would break the ordering guarantee
> Still disallow changing an ordered workqueue's no_numa, which would break the ordering guarantee
> 
> Changing attributes will introduce new pwqs in a given workqueue, and the
> old implementation doesn't have any solution to handle multiple pwqs on ordered
> workqueues, so changing attributes for ordered workqueues was disallowed.
> 
> After investigating several solutions for handling multiple pwqs on ordered workqueues,
> I found that the solution which reuses max_active is the simplest.
> In this solution, a newly introduced pwq's max_active is kept at *ZERO* until it becomes
> the oldest pwq. Thus only one (the oldest) pwq is active, and ordering is guaranteed.
> 
> This solution forces ordering at a higher level while non-reentrancy is also
> forced, so we don't need to queue any work to its previous pool. And we shouldn't
> do it: we must disallow any work that repeatedly requeues itself back-to-back,
> since that would keep the oldest pwq alive and starve the newest pwq (never active).
> 
> Build tested only.
> This patch depends on a previous patch:
> workqueue: add __WQ_FREEZING and remove POOL_FREEZING
> 
> Frederic:
> 	You can pick this patch into your updated patchset before
> 	TJ applies it.
> 
> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>

So IIUC this is to allow registering ordered workqueues as WQ_SYSFS? But I think
we are going to follow Tejun's suggestion to have a low-level cpumask which defines
the limit for all unbound workqueues. This indeed tremendously simplifies everything.
I'll post the series soon.

But maybe I'm missing other requirements that are fixed by your patch?

Thanks.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH] workqueue: allow changing attributions of ordered workqueue
  2014-04-15 12:25   ` Frederic Weisbecker
@ 2014-04-15 15:19     ` Lai Jiangshan
  0 siblings, 0 replies; 24+ messages in thread
From: Lai Jiangshan @ 2014-04-15 15:19 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Tejun Heo, Viresh Kumar

>
> So IIUC this is to allow registering ordered workqueues as WQ_SYSFS? But I think

Making ordered workqueues' attributes changeable was on my TODO list.
I thought it might help you, so I prioritized it.
Never mind, I will push my patch separately if you don't need it.

> we are going to follow Tejun's suggestion to have a low-level cpumask which defines
> the limit for all unbound workqueues. This indeed tremendously simplifies everything.
> I'll post the series soon.

It sounds feasible. I look forward to your patches.

>
> But maybe I'm missing other requirements that are fixed by your patch?
>
> Thanks.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH] workqueue: allow changing attributions of ordered workqueue
  2014-04-15  9:58 ` [PATCH] workqueue: allow changing attributions of ordered workqueue Lai Jiangshan
  2014-04-15 12:25   ` Frederic Weisbecker
@ 2014-04-23  0:04   ` Frederic Weisbecker
  2014-04-23  2:16     ` Lai Jiangshan
  1 sibling, 1 reply; 24+ messages in thread
From: Frederic Weisbecker @ 2014-04-23  0:04 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Tejun Heo, Viresh Kumar

Hi Lai,

So actually I'll need to use apply_workqueue_attrs() in the next patchset, so
I'm considering this patch.

Some comments below:

On Tue, Apr 15, 2014 at 05:58:08PM +0800, Lai Jiangshan wrote:
> From 534f1df8a5a03427b0fc382150fbd34e05648a28 Mon Sep 17 00:00:00 2001
> From: Lai Jiangshan <laijs@cn.fujitsu.com>
> Date: Tue, 15 Apr 2014 17:52:19 +0800
> Subject: [PATCH] workqueue: allow changing attributions of ordered workqueue
> 
> Allow changing ordered workqueue's cpumask
> Allow changing ordered workqueue's nice value
> Allow registering ordered workqueue to SYSFS
> 
> Still disallow changing an ordered workqueue's max_active, which would break the ordering guarantee
> Still disallow changing an ordered workqueue's no_numa, which would break the ordering guarantee
> 
> Changing attributes will introduce new pwqs in a given workqueue, and the
> old implementation doesn't have any solution to handle multiple pwqs on ordered
> workqueues, so changing attributes for ordered workqueues was disallowed.
> 
> After investigating several solutions for handling multiple pwqs on ordered workqueues,
> I found that the solution which reuses max_active is the simplest.
> In this solution, a newly introduced pwq's max_active is kept at *ZERO* until it becomes
> the oldest pwq.

I don't see where this zero value is enforced. Do you mean 1? That's the initial
max_active value for ordered workqueues.

> Thus only one (the oldest) pwq is active, and ordering is guaranteed.
> 
> This solution forces ordering at a higher level while non-reentrancy is also
> forced, so we don't need to queue any work to its previous pool. And we shouldn't
> do it: we must disallow any work that repeatedly requeues itself back-to-back,
> since that would keep the oldest pwq alive and starve the newest pwq (never active).
> 
> Build tested only.
> This patch depends on a previous patch:
> workqueue: add __WQ_FREEZING and remove POOL_FREEZING
> 
> Frederic:
> 	You can pick this patch into your updated patchset before
> 	TJ applies it.
> 
> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
> ---
>  kernel/workqueue.c |   66 ++++++++++++++++++++++++++++++---------------------
>  1 files changed, 39 insertions(+), 27 deletions(-)
> 
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 92c9ada..fadcc4a 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -1350,8 +1350,14 @@ retry:
>  	 * If @work was previously on a different pool, it might still be
>  	 * running there, in which case the work needs to be queued on that
>  	 * pool to guarantee non-reentrancy.
> +	 *
> +	 * pwqs are guaranteed to become active in order for an ordered
> +	 * workqueue, which also guarantees non-reentrancy for works. So a
> +	 * work doesn't need to be queued on its previous pool, and it
> +	 * shouldn't be, since we need the previous pwq to be releasable to
> +	 * avoid work starvation on the newest pool.

BTW, who needs this non-reentrancy guarantee? Is this an implicit guarantee for
all workqueues that is there for backward compatibility? I've seen some
patches dealing with that lately but I don't recall the details.

>  	 */
> -	last_pool = get_work_pool(work);
> +	last_pool = wq->flags & __WQ_ORDERED ? NULL : get_work_pool(work);

Does it hurt performance to do this old worker recovery? It seems so when
I look at get_work_pool() and find_worker_executing_work().

Perhaps we can generalize this check to all wqs which have only one
worker?

Anyway, those are just optimizations; nothing that needs to be done in this
patch.

>  	if (last_pool && last_pool != pwq->pool) {
>  		struct worker *worker;
>  
> @@ -3179,6 +3185,10 @@ static ssize_t wq_numa_store(struct device *dev, struct device_attribute *attr,
>  	struct workqueue_attrs *attrs;
>  	int v, ret;
>  
> +	/* Creating per-node pwqs breaks ordering guarantee. Keep no_numa = 1 */
> +	if (WARN_ON(wq->flags & __WQ_ORDERED))
> +		return -EINVAL;
> +
>  	attrs = wq_sysfs_prep_attrs(wq);
>  	if (!attrs)
>  		return -ENOMEM;
> @@ -3239,14 +3249,6 @@ int workqueue_sysfs_register(struct workqueue_struct *wq)
>  	struct wq_device *wq_dev;
>  	int ret;
>  
> -	/*
> -	 * Adjusting max_active or creating new pwqs by applyting
> -	 * attributes breaks ordering guarantee.  Disallow exposing ordered
> -	 * workqueues.
> -	 */
> -	if (WARN_ON(wq->flags & __WQ_ORDERED))
> -		return -EINVAL;
> -
>  	wq->wq_dev = wq_dev = kzalloc(sizeof(*wq_dev), GFP_KERNEL);
>  	if (!wq_dev)
>  		return -ENOMEM;
> @@ -3568,6 +3570,13 @@ static void rcu_free_pwq(struct rcu_head *rcu)
>  			container_of(rcu, struct pool_workqueue, rcu));
>  }
>  
> +static struct pool_workqueue *oldest_pwq(struct workqueue_struct *wq)
> +{
> +	return list_last_entry(&wq->pwqs, struct pool_workqueue, pwqs_node);
> +}
> +
> +static void pwq_adjust_max_active(struct pool_workqueue *pwq);
> +
>  /*
>   * Scheduled on system_wq by put_pwq() when an unbound pwq hits zero refcnt
>   * and needs to be destroyed.
> @@ -3583,14 +3592,12 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
>  	if (WARN_ON_ONCE(!(wq->flags & WQ_UNBOUND)))
>  		return;
>  
> -	/*
> -	 * Unlink @pwq.  Synchronization against wq->mutex isn't strictly
> -	 * necessary on release but do it anyway.  It's easier to verify
> -	 * and consistent with the linking path.
> -	 */
>  	mutex_lock(&wq->mutex);
>  	list_del_rcu(&pwq->pwqs_node);
>  	is_last = list_empty(&wq->pwqs);
> +	/* try to activate the oldest pwq when needed */
> +	if (!is_last && (wq->flags & __WQ_ORDERED))
> +		pwq_adjust_max_active(oldest_pwq(wq));

So this switches the new pwq's max_active from 0 to 1?
I still can't find where it is set to 0.

>  	mutex_unlock(&wq->mutex);
>  
>  	mutex_lock(&wq_pool_mutex);
> @@ -3609,6 +3616,16 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
>  	}
>  }
>  
> +static bool pwq_active(struct pool_workqueue *pwq)
> +{
> +	/* Only the oldest pwq is active in the ordered wq */
> +	if (pwq->wq->flags & __WQ_ORDERED)
> +		return pwq == oldest_pwq(pwq->wq);
> +
> +	/* All pwqs in the non-ordered wq are active */
> +	return true;
> +}
> +
>  /**
>   * pwq_adjust_max_active - update a pwq's max_active to the current setting
>   * @pwq: target pool_workqueue
> @@ -3626,7 +3643,7 @@ static void pwq_adjust_max_active(struct pool_workqueue *pwq)
>  
>  	spin_lock_irq(&pwq->pool->lock);
>  
> -	if (!(wq->flags & __WQ_FREEZING)) {
> +	if (!(wq->flags & __WQ_FREEZING) && pwq_active(pwq)) {

Ah! So that must be how we initially set the new max_active to 0.

>  		pwq->max_active = wq->saved_max_active;
>  
>  		while (!list_empty(&pwq->delayed_works) &&
> @@ -3680,11 +3697,11 @@ static void link_pwq(struct pool_workqueue *pwq)
>  	 */
>  	pwq->work_color = wq->work_color;
>  
> +	/* link in @pwq at the head of &wq->pwqs */
> +	list_add_rcu(&pwq->pwqs_node, &wq->pwqs);
> +
>  	/* sync max_active to the current setting */
>  	pwq_adjust_max_active(pwq);
> -
> -	/* link in @pwq */
> -	list_add_rcu(&pwq->pwqs_node, &wq->pwqs);
>  }
>  
>  /* obtain a pool matching @attr and create a pwq associating the pool and @wq */
> @@ -3810,8 +3827,8 @@ int apply_workqueue_attrs(struct workqueue_struct *wq,
>  	if (WARN_ON(!(wq->flags & WQ_UNBOUND)))
>  		return -EINVAL;
>  
> -	/* creating multiple pwqs breaks ordering guarantee */
> -	if (WARN_ON((wq->flags & __WQ_ORDERED) && !list_empty(&wq->pwqs)))
> +	/* creating multiple per-node pwqs breaks ordering guarantee */
> +	if (WARN_ON((wq->flags & __WQ_ORDERED) && !attrs->no_numa))
>  		return -EINVAL;
>  
>  	pwq_tbl = kzalloc(wq_numa_tbl_len * sizeof(pwq_tbl[0]), GFP_KERNEL);
> @@ -4004,7 +4021,7 @@ out_unlock:
>  static int alloc_and_link_pwqs(struct workqueue_struct *wq)
>  {
>  	bool highpri = wq->flags & WQ_HIGHPRI;
> -	int cpu, ret;
> +	int cpu;
>  
>  	if (!(wq->flags & WQ_UNBOUND)) {
>  		wq->cpu_pwqs = alloc_percpu(struct pool_workqueue);
> @@ -4025,12 +4042,7 @@ static int alloc_and_link_pwqs(struct workqueue_struct *wq)
>  		}
>  		return 0;
>  	} else if (wq->flags & __WQ_ORDERED) {
> -		ret = apply_workqueue_attrs(wq, ordered_wq_attrs[highpri]);
> -		/* there should only be single pwq for ordering guarantee */
> -		WARN(!ret && (wq->pwqs.next != &wq->dfl_pwq->pwqs_node ||
> -			      wq->pwqs.prev != &wq->dfl_pwq->pwqs_node),
> -		     "ordering guarantee broken for workqueue %s\n", wq->name);
> -		return ret;
> +		return apply_workqueue_attrs(wq, ordered_wq_attrs[highpri]);
>  	} else {
>  		return apply_workqueue_attrs(wq, unbound_std_wq_attrs[highpri]);
>  	}

So I have mixed feelings about this patch given the complication. But it's probably
better to take that direction.

I just wish we had some way to automatically detect situations where a work
mistakenly runs re-entrantly, because if it ever happens randomly,
it's going to be hell to debug.
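
Something like the following perhaps, as a pure sketch (CONFIG_DEBUG_WORKQUEUE
doesn't exist, and the locking is hand-waved): on the queueing path, warn if
the work's last pool is still executing it while we are about to queue it
elsewhere:

#ifdef CONFIG_DEBUG_WORKQUEUE	/* hypothetical debug option */
/*
 * Sketch: warn if queueing @work on @pwq while another pool is still
 * executing it, i.e. a re-entrant execution is about to happen.
 * Real code would need pool->lock held around
 * find_worker_executing_work().
 */
static void check_work_reentrancy(struct pool_workqueue *pwq,
				  struct work_struct *work)
{
	struct worker_pool *last_pool = get_work_pool(work);

	if (last_pool && last_pool != pwq->pool &&
	    find_worker_executing_work(last_pool, work))
		WARN_ONCE(1, "workqueue: re-entrant execution of %pf\n",
			  work->func);
}
#endif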

Thanks.

> -- 
> 1.7.4.4
> 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH] workqueue: allow changing attributions of ordered workqueue
  2014-04-23  0:04   ` Frederic Weisbecker
@ 2014-04-23  2:16     ` Lai Jiangshan
  0 siblings, 0 replies; 24+ messages in thread
From: Lai Jiangshan @ 2014-04-23  2:16 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Tejun Heo, Viresh Kumar

On 04/23/2014 08:04 AM, Frederic Weisbecker wrote:
> Hi Lai,
> 
> So actually I'll need to use apply_workqueue_attrs() in the next patchset, so
> I'm considering this patch.
> 
> Some comments below:
> 
> On Tue, Apr 15, 2014 at 05:58:08PM +0800, Lai Jiangshan wrote:
>> From 534f1df8a5a03427b0fc382150fbd34e05648a28 Mon Sep 17 00:00:00 2001
>> From: Lai Jiangshan <laijs@cn.fujitsu.com>
>> Date: Tue, 15 Apr 2014 17:52:19 +0800
>> Subject: [PATCH] workqueue: allow changing attributions of ordered workqueue
>>
>> Allow changing ordered workqueue's cpumask
>> Allow changing ordered workqueue's nice value
>> Allow registering ordered workqueue to SYSFS
>>
>> Still disallow changing an ordered workqueue's max_active, which would break the ordering guarantee
>> Still disallow changing an ordered workqueue's no_numa, which would break the ordering guarantee
>>
>> Changing attributes will introduce new pwqs in a given workqueue, and the
>> old implementation doesn't have any solution to handle multiple pwqs on ordered
>> workqueues, so changing attributes for ordered workqueues was disallowed.
>>
>> After investigating several solutions for handling multiple pwqs on ordered workqueues,
>> I found that the solution which reuses max_active is the simplest.
>> In this solution, a newly introduced pwq's max_active is kept at *ZERO* until it becomes
>> the oldest pwq.
> 
> I don't see where this zero value is enforced. Do you mean 1? That's the initial
> max_active value for ordered workqueues.

A pwq's max_active is forced to zero in pwq_adjust_max_active():
if an older pwq still exists, the newer one's max_active is forced to zero.
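
For reference, after the patch the function has roughly this shape
(reconstructed from the diff above, other details elided; the else branch
is where the zero comes from):

static void pwq_adjust_max_active(struct pool_workqueue *pwq)
{
	struct workqueue_struct *wq = pwq->wq;

	spin_lock_irq(&pwq->pool->lock);

	if (!(wq->flags & __WQ_FREEZING) && pwq_active(pwq)) {
		pwq->max_active = wq->saved_max_active;

		/* activate delayed works up to the new max_active */
		while (!list_empty(&pwq->delayed_works) &&
		       pwq->nr_active < pwq->max_active)
			pwq_activate_first_delayed(pwq);

		wake_up_worker(pwq->pool);
	} else {
		/* frozen, or a newer pwq of an ordered wq: keep inactive */
		pwq->max_active = 0;
	}

	spin_unlock_irq(&pwq->pool->lock);
}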

> 
>> Thus only one (the oldest) pwq is active, and ordering is guaranteed.
>>
>> This solution forces ordering at a higher level while non-reentrancy is also
>> forced, so we don't need to queue any work to its previous pool. And we shouldn't
>> do it: we must disallow any work that repeatedly requeues itself back-to-back,
>> since that would keep the oldest pwq alive and starve the newest pwq (never active).
>>
>> Build tested only.
>> This patch depends on a previous patch:
>> workqueue: add __WQ_FREEZING and remove POOL_FREEZING
>>
>> Frederic:
>> 	You can pick this patch into your updated patchset before
>> 	TJ applies it.
>>
>> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
>> ---
>>  kernel/workqueue.c |   66 ++++++++++++++++++++++++++++++---------------------
>>  1 files changed, 39 insertions(+), 27 deletions(-)
>>
>> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
>> index 92c9ada..fadcc4a 100644
>> --- a/kernel/workqueue.c
>> +++ b/kernel/workqueue.c
>> @@ -1350,8 +1350,14 @@ retry:
>>  	 * If @work was previously on a different pool, it might still be
>>  	 * running there, in which case the work needs to be queued on that
>>  	 * pool to guarantee non-reentrancy.
>> +	 *
>> +	 * pwqs are guaranteed to become active in order for an ordered
>> +	 * workqueue, which also guarantees non-reentrancy for works. So a
>> +	 * work doesn't need to be queued on its previous pool, and it
>> +	 * shouldn't be, since we need the previous pwq to be releasable to
>> +	 * avoid work starvation on the newest pool.
> 
> BTW, who needs this non-reentrancy guarantee? Is this an implicit guarantee for
> all workqueues that is there for backward compatibility? I've seen some
> patches dealing with that lately but I don't recall the details.
> 

dbf2576e37da0fcc7aacbfbb9fd5d3de7888a3c1 ("workqueue: make all workqueues non-reentrant")

>>  	 */
>> -	last_pool = get_work_pool(work);
>> +	last_pool = wq->flags & __WQ_ORDERED ? NULL : get_work_pool(work);
> 
> Does it hurt performance to do this old worker recovery? It seems so when
> I look at get_work_pool() and find_worker_executing_work().
> 
> Perhaps we can generalize this check to all wqs which have only one
> worker?
> 
> Anyway, those are just optimizations; nothing that needs to be done in this
> patch.
> 
[...]
> 
> So I have mixed feelings about this patch given the complication. But it's probably
> better to take that direction.

Feel free to share any such feelings here.

> 
> I just wish we had some way to automatically detect situations where a work
> mistakenly runs re-entrantly, because if it ever happens randomly,
> it's going to be hell to debug.

dbf2576e37da0fcc7aacbfbb9fd5d3de7888a3c1 already forces non-reentrancy and
is well reviewed. Any additional automatic detection is also welcome
for debugging, but I don't think it is required for your aim or for this patch.

> 
> Thanks.
> 
>> -- 
>> 1.7.4.4
>>
> .
> 


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH 1/4] workqueue: Move workqueue bus attr to device attribute
  2014-04-03  7:09   ` Viresh Kumar
@ 2014-04-24 13:48     ` Frederic Weisbecker
  0 siblings, 0 replies; 24+ messages in thread
From: Frederic Weisbecker @ 2014-04-24 13:48 UTC (permalink / raw)
  To: Viresh Kumar
  Cc: LKML, Christoph Lameter, Kevin Hilman, Mike Galbraith,
	Paul E. McKenney, Tejun Heo

Hi Viresh,

On Thu, Apr 03, 2014 at 12:39:37PM +0530, Viresh Kumar wrote:
> Nothing much, just some nitpicks :)

Thanks for your reviews, but in the end I'm dropping these two patches :)

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2014-04-24 13:48 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-03-27 17:20 [PATCH 0/4] workqueue: Control cpu affinity of all unbound workqueues v2 Frederic Weisbecker
2014-03-27 17:20 ` [PATCH 1/4] workqueue: Move workqueue bus attr to device attribute Frederic Weisbecker
2014-04-03  7:09   ` Viresh Kumar
2014-04-24 13:48     ` Frederic Weisbecker
2014-03-27 17:21 ` [PATCH 2/4] workqueues: Account unbound workqueue in a seperate list Frederic Weisbecker
2014-03-30 12:57   ` Tejun Heo
2014-04-03 14:48     ` Frederic Weisbecker
2014-04-03 15:01       ` Tejun Heo
2014-04-03 15:43         ` Frederic Weisbecker
2014-03-27 17:21 ` [PATCH 3/4] workqueue: Add anon workqueue sysfs hierarchy Frederic Weisbecker
2014-03-30 13:01   ` Tejun Heo
2014-04-03 14:42     ` Frederic Weisbecker
2014-04-03 14:58       ` Tejun Heo
2014-04-03 15:05         ` Frederic Weisbecker
2014-04-03  7:07   ` Viresh Kumar
2014-03-27 17:21 ` [PATCH 4/4] workqueue: Include ordered workqueues in anon workqueue sysfs interface Frederic Weisbecker
2014-03-31 12:50   ` Lai Jiangshan
2014-03-31 13:15     ` Lai Jiangshan
2014-04-03 15:59       ` Frederic Weisbecker
2014-04-15  9:58 ` [PATCH] workqueue: allow changing attributions of ordered workqueue Lai Jiangshan
2014-04-15 12:25   ` Frederic Weisbecker
2014-04-15 15:19     ` Lai Jiangshan
2014-04-23  0:04   ` Frederic Weisbecker
2014-04-23  2:16     ` Lai Jiangshan
