From: Chris Metcalf <cmetcalf@tilera.com>
To: <linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>,
	Tejun Heo <tj@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
	Frederic Weisbecker <fweisbec@gmail.com>,
	Cody P Schafer <cody@linux.vnet.ibm.com>
Subject: [PATCH v6 1/2] workqueue: add new schedule_on_cpu_mask() API
Date: Fri, 9 Aug 2013 13:49:44 -0400	[thread overview]
Message-ID: <201308091847.r79IlimS015092@farm-0021.internal.tilera.com> (raw)
In-Reply-To: <20130809174009.GV20515@mtj.dyndns.org>

This primitive allows scheduling work to run on a particular set of
CPUs described by a "struct cpumask".  This can be useful, for example,
if you have a per-cpu variable that requires code execution only when
it has a certain value (for example, when it is a non-empty list).
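
As a concrete illustration, a caller might build the mask from per-cpu
state and schedule work only on the CPUs that need it.  The following
is a minimal sketch, not part of this patch; pcpu_pending_list,
drain_func, and drain_pending are hypothetical names:

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

/* Sketch only: a per-cpu list, assumed initialized elsewhere,
 * that occasionally needs draining.
 */
static DEFINE_PER_CPU(struct list_head, pcpu_pending_list);

static void drain_func(struct work_struct *work)
{
	/* runs synchronously on each CPU set in the mask below */
}

static int drain_pending(void)
{
	cpumask_var_t mask;
	int cpu, ret;

	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;

	get_online_cpus();
	for_each_online_cpu(cpu)
		if (!list_empty(&per_cpu(pcpu_pending_list, cpu)))
			cpumask_set_cpu(cpu, mask);
	ret = schedule_on_cpu_mask(drain_func, mask);
	put_online_cpus();

	free_cpumask_var(mask);
	return ret;
}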

Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
---
v6: add Tejun's Acked-by, and add the missing get/put_online_cpus() to
lru_add_drain_all().

v5: provide validity checking on the cpumask for schedule_on_cpu_mask().
By making the check all-or-nothing (-EINVAL if the mask is invalid), we
require that calling code know clearly which CPUs it is targeting; see
the sketch after these notes.
(Note: no change to the mm/swap.c commit)

v4: don't lose possible -ENOMEM in schedule_on_each_cpu()
(Note: no change to the mm/swap.c commit)

v3: split commit into two, one for workqueue and one for mm, though both
should probably be taken through -mm.
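
Because the v5 check is all-or-nothing, a mask that includes even one
CPU that has never been online fails as a whole rather than being
silently trimmed.  A hypothetical caller, again only a sketch:

static int run_on_all_possible(work_func_t func)
{
	int ret = schedule_on_cpu_mask(func, cpu_possible_mask);

	if (ret == -EINVAL)
		pr_warn("mask includes a never-onlined CPU; no work scheduled\n");
	return ret;
}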

 include/linux/workqueue.h |  3 +++
 kernel/workqueue.c        | 51 ++++++++++++++++++++++++++++++++++++++---------
 2 files changed, 45 insertions(+), 9 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index a0ed78a..71a3fe7 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -13,6 +13,8 @@
 #include <linux/atomic.h>
 #include <linux/cpumask.h>
 
+struct cpumask;
+
 struct workqueue_struct;
 
 struct work_struct;
@@ -470,6 +472,7 @@ extern void flush_workqueue(struct workqueue_struct *wq);
 extern void drain_workqueue(struct workqueue_struct *wq);
 extern void flush_scheduled_work(void);
 
+extern int schedule_on_cpu_mask(work_func_t func, const struct cpumask *mask);
 extern int schedule_on_each_cpu(work_func_t func);
 
 int execute_in_process_context(work_func_t fn, struct execute_work *);
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index f02c4a4..63d504a 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -292,6 +292,9 @@ static DEFINE_SPINLOCK(wq_mayday_lock);	/* protects wq->maydays list */
 static LIST_HEAD(workqueues);		/* PL: list of all workqueues */
 static bool workqueue_freezing;		/* PL: have wqs started freezing? */
 
+/* set of cpus that are valid for per-cpu workqueue scheduling */
+static struct cpumask wq_valid_cpus;
+
 /* the per-cpu worker pools */
 static DEFINE_PER_CPU_SHARED_ALIGNED(struct worker_pool [NR_STD_WORKER_POOLS],
 				     cpu_worker_pools);
@@ -2962,43 +2965,66 @@ bool cancel_delayed_work_sync(struct delayed_work *dwork)
 EXPORT_SYMBOL(cancel_delayed_work_sync);
 
 /**
- * schedule_on_each_cpu - execute a function synchronously on each online CPU
+ * schedule_on_cpu_mask - execute a function synchronously on each listed CPU
  * @func: the function to call
+ * @mask: the cpumask to invoke the function on
  *
- * schedule_on_each_cpu() executes @func on each online CPU using the
+ * schedule_on_cpu_mask() executes @func on each listed CPU using the
  * system workqueue and blocks until all CPUs have completed.
- * schedule_on_each_cpu() is very slow.
+ * schedule_on_cpu_mask() is very slow.  You may only specify CPUs
+ * that are online or have previously been online; specifying an
+ * invalid CPU mask will return -EINVAL without scheduling any work.
  *
  * RETURNS:
  * 0 on success, -errno on failure.
  */
-int schedule_on_each_cpu(work_func_t func)
+int schedule_on_cpu_mask(work_func_t func, const struct cpumask *mask)
 {
 	int cpu;
 	struct work_struct __percpu *works;
 
+	if (!cpumask_subset(mask, &wq_valid_cpus))
+		return -EINVAL;
+
 	works = alloc_percpu(struct work_struct);
 	if (!works)
 		return -ENOMEM;
 
-	get_online_cpus();
-
-	for_each_online_cpu(cpu) {
+	for_each_cpu(cpu, mask) {
 		struct work_struct *work = per_cpu_ptr(works, cpu);
 
 		INIT_WORK(work, func);
 		schedule_work_on(cpu, work);
 	}
 
-	for_each_online_cpu(cpu)
+	for_each_cpu(cpu, mask)
 		flush_work(per_cpu_ptr(works, cpu));
 
-	put_online_cpus();
 	free_percpu(works);
 	return 0;
 }
 
 /**
+ * schedule_on_each_cpu - execute a function synchronously on each online CPU
+ * @func: the function to call
+ *
+ * schedule_on_each_cpu() executes @func on each online CPU using the
+ * system workqueue and blocks until all CPUs have completed.
+ * schedule_on_each_cpu() is very slow.
+ *
+ * RETURNS:
+ * 0 on success, -errno on failure.
+ */
+int schedule_on_each_cpu(work_func_t func)
+{
+	int ret;
+	get_online_cpus();
+	ret = schedule_on_cpu_mask(func, cpu_online_mask);
+	put_online_cpus();
+	return ret;
+}
+
+/**
  * flush_scheduled_work - ensure that any scheduled work has run to completion.
  *
  * Forces execution of the kernel-global workqueue and blocks until its
@@ -4687,6 +4713,9 @@ static int __cpuinit workqueue_cpu_up_callback(struct notifier_block *nfb,
 		list_for_each_entry(wq, &workqueues, list)
 			wq_update_unbound_numa(wq, cpu, true);
 
+		/* track the set of cpus that have ever been online */
+		cpumask_set_cpu(cpu, &wq_valid_cpus);
+
 		mutex_unlock(&wq_pool_mutex);
 		break;
 	}
@@ -5011,6 +5040,10 @@ static int __init init_workqueues(void)
 	       !system_unbound_wq || !system_freezable_wq ||
 	       !system_power_efficient_wq ||
 	       !system_freezable_power_efficient_wq);
+
+	/* mark startup cpu as valid */
+	cpumask_set_cpu(smp_processor_id(), &wq_valid_cpus);
+
 	return 0;
 }
 early_initcall(init_workqueues);
-- 
1.8.3.1

