linux-kernel.vger.kernel.org archive mirror
* [driver-core PATCH v4 0/6] Add NUMA aware async_schedule calls
@ 2018-10-15 15:09 Alexander Duyck
  2018-10-15 15:09 ` [driver-core PATCH v4 1/6] workqueue: Provide queue_work_node to queue work near a given NUMA node Alexander Duyck
                   ` (5 more replies)
  0 siblings, 6 replies; 15+ messages in thread
From: Alexander Duyck @ 2018-10-15 15:09 UTC (permalink / raw)
  To: gregkh, linux-kernel
  Cc: len.brown, rafael, linux-pm, jiangshanlai, pavel, zwisler, tj,
	akpm, alexander.h.duyck

This patch set provides functionality that will help to improve the
locality of the async_schedule calls used to provide deferred
initialization.

This patch set originally started out focused on just the one call to
async_schedule_domain in the nvdimm tree that was being used to defer the
device_add call. However, after doing some digging I realized the scope of
this was much broader than I had originally planned. As such I went through
and reworked the underlying infrastructure, down to replacing the
queue_work call itself with a function of my own, and opted to provide a
NUMA aware solution that would work for a broader audience.

RFC->v1:
    Dropped nvdimm patch to submit later
        It relies on code in libnvdimm development tree
    Simplified queue_work_near to just convert node into a CPU
    Split up drivers core and PM core patches
v1->v2:
    Renamed queue_work_near to queue_work_node
    Added WARN_ON_ONCE if we use queue_work_node with per-cpu workqueue
v2->v3:
    Added Acked-by for queue_work_node patch
    Continued rename from _near to _node to be consistent with queue_work_node
        Renamed async_schedule_near_domain to async_schedule_node_domain
        Renamed async_schedule_near to async_schedule_node
    Added kerneldoc for new async_schedule_XXX functions
    Updated patch description for patch 4 to include data on potential gains
v3->v4:
    Added patch to consolidate use of need_parent_lock
    Make asynchronous driver probing explicit about use of drvdata
    Added Acked-by for PM core patch

---

Alexander Duyck (6):
      workqueue: Provide queue_work_node to queue work near a given NUMA node
      async: Add support for queueing on specific NUMA node
      device core: Consolidate locking and unlocking of parent and device
      driver core: Probe devices asynchronously instead of the driver
      driver core: Attach devices on CPU local to device node
      PM core: Use new async_schedule_dev command


 drivers/base/base.h       |    2 +
 drivers/base/bus.c        |   46 ++--------------
 drivers/base/dd.c         |  130 ++++++++++++++++++++++++++++++++++++++++-----
 drivers/base/power/main.c |   12 ++--
 include/linux/async.h     |   84 ++++++++++++++++++++++++++++-
 include/linux/device.h    |    4 +
 include/linux/workqueue.h |    2 +
 kernel/async.c            |   53 ++++++++++--------
 kernel/workqueue.c        |   84 +++++++++++++++++++++++++++++
 9 files changed, 329 insertions(+), 88 deletions(-)

--


* [driver-core PATCH v4 1/6] workqueue: Provide queue_work_node to queue work near a given NUMA node
  2018-10-15 15:09 [driver-core PATCH v4 0/6] Add NUMA aware async_schedule calls Alexander Duyck
@ 2018-10-15 15:09 ` Alexander Duyck
  2018-10-15 15:09 ` [driver-core PATCH v4 2/6] async: Add support for queueing on specific " Alexander Duyck
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Alexander Duyck @ 2018-10-15 15:09 UTC (permalink / raw)
  To: gregkh, linux-kernel
  Cc: len.brown, rafael, linux-pm, jiangshanlai, pavel, zwisler, tj,
	akpm, alexander.h.duyck

This patch provides a new function, queue_work_node, which is meant to
schedule work on a "random" CPU of the requested NUMA node. The main
motivation for this is to allow asynchronous init to improve boot times
for devices that are local to a specific node.

For now we just default to the first CPU that is in the intersection of
the cpumask of the node and the online cpumask. The only exception is
when the current CPU is already local to the node, in which case we just
use it. This should work for our purposes, as we are currently only using
this for unbound work, so the CPU will be translated back to a node
anyway instead of being used directly.

As we are only using the first CPU to represent the NUMA node for now, I
am limiting the scope of the function so that it can only be used with
unbound workqueues.

Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 include/linux/workqueue.h |    2 +
 kernel/workqueue.c        |   84 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 86 insertions(+)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 60d673e15632..1f50c1e586e7 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -463,6 +463,8 @@ int apply_workqueue_attrs(struct workqueue_struct *wq,
 
 extern bool queue_work_on(int cpu, struct workqueue_struct *wq,
 			struct work_struct *work);
+extern bool queue_work_node(int node, struct workqueue_struct *wq,
+			    struct work_struct *work);
 extern bool queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
 			struct delayed_work *work, unsigned long delay);
 extern bool mod_delayed_work_on(int cpu, struct workqueue_struct *wq,
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 0280deac392e..6ed7c2eb84b0 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1492,6 +1492,90 @@ bool queue_work_on(int cpu, struct workqueue_struct *wq,
 }
 EXPORT_SYMBOL(queue_work_on);
 
+/**
+ * workqueue_select_cpu_near - Select a CPU based on NUMA node
+ * @node: NUMA node ID that we want to bind a CPU from
+ *
+ * This function will attempt to find a "random" cpu available on a given
+ * node. If there are no CPUs available on the given node it will return
+ * WORK_CPU_UNBOUND indicating that we should just schedule to any
+ * available CPU if we need to schedule this work.
+ */
+static int workqueue_select_cpu_near(int node)
+{
+	int cpu;
+
+	/* No point in doing this if NUMA isn't enabled for workqueues */
+	if (!wq_numa_enabled)
+		return WORK_CPU_UNBOUND;
+
+	/* Delay binding to CPU if node is not valid or not online */
+	if (node < 0 || node >= MAX_NUMNODES || !node_online(node))
+		return WORK_CPU_UNBOUND;
+
+	/* Use local node/cpu if we are already there */
+	cpu = raw_smp_processor_id();
+	if (node == cpu_to_node(cpu))
+		return cpu;
+
+	/* Use "random", otherwise known as "first", online CPU of node */
+	cpu = cpumask_any_and(cpumask_of_node(node), cpu_online_mask);
+
+	/* If CPU is valid return that, otherwise just defer */
+	return (cpu < nr_cpu_ids) ? cpu : WORK_CPU_UNBOUND;
+}
+
+/**
+ * queue_work_node - queue work on a "random" cpu for a given NUMA node
+ * @node: NUMA node that we are targeting the work for
+ * @wq: workqueue to use
+ * @work: work to queue
+ *
+ * We queue the work to a "random" CPU within a given NUMA node. The basic
+ * idea here is to provide a way to somehow associate work with a given
+ * NUMA node.
+ *
+ * This function will only make a best effort attempt at getting this onto
+ * the right NUMA node. If no node is requested or the requested node is
+ * offline then we just fall back to standard queue_work behavior.
+ *
+ * Currently the "random" CPU ends up being the first available CPU in the
+ * intersection of cpu_online_mask and the cpumask of the node, unless we
+ * are running on the node. In that case we just use the current CPU.
+ *
+ * Return: %false if @work was already on a queue, %true otherwise.
+ */
+bool queue_work_node(int node, struct workqueue_struct *wq,
+		     struct work_struct *work)
+{
+	unsigned long flags;
+	bool ret = false;
+
+	/*
+	 * This current implementation is specific to unbound workqueues.
+	 * Specifically we only return the first available CPU for a given
+	 * node instead of cycling through individual CPUs within the node.
+	 *
+	 * If this is used with a per-cpu workqueue then the logic in
+	 * workqueue_select_cpu_near would need to be updated to allow for
+	 * some round robin type logic.
+	 */
+	WARN_ON_ONCE(!(wq->flags & WQ_UNBOUND));
+
+	local_irq_save(flags);
+
+	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
+		int cpu = workqueue_select_cpu_near(node);
+
+		__queue_work(cpu, wq, work);
+		ret = true;
+	}
+
+	local_irq_restore(flags);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(queue_work_node);
+
 void delayed_work_timer_fn(struct timer_list *t)
 {
 	struct delayed_work *dwork = from_timer(dwork, t, timer);
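
For illustration, a minimal usage sketch of the new interface (hypothetical
caller names, not part of the patch): work is queued near a device's node
via system_unbound_wq, falling back to any CPU if the node has no online
CPUs or is NUMA_NO_NODE.

static void my_work_fn(struct work_struct *work)
{
	/* node-local processing for the device */
}

static DECLARE_WORK(my_work, my_work_fn);

static void kick_work_near_dev(struct device *dev)
{
	/*
	 * dev_to_node() may return NUMA_NO_NODE; queue_work_node()
	 * then falls back to normal queue_work() behavior.
	 */
	queue_work_node(dev_to_node(dev), system_unbound_wq, &my_work);
}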



* [driver-core PATCH v4 2/6] async: Add support for queueing on specific NUMA node
  2018-10-15 15:09 [driver-core PATCH v4 0/6] Add NUMA aware async_schedule calls Alexander Duyck
  2018-10-15 15:09 ` [driver-core PATCH v4 1/6] workqueue: Provide queue_work_node to queue work near a given NUMA node Alexander Duyck
@ 2018-10-15 15:09 ` Alexander Duyck
  2018-10-15 15:09 ` [driver-core PATCH v4 3/6] device core: Consolidate locking and unlocking of parent and device Alexander Duyck
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Alexander Duyck @ 2018-10-15 15:09 UTC (permalink / raw)
  To: gregkh, linux-kernel
  Cc: len.brown, rafael, linux-pm, jiangshanlai, pavel, zwisler, tj,
	akpm, alexander.h.duyck

This patch introduces four new variants of the async_schedule_ functions
that allow scheduling on a specific NUMA node.

The first two functions are async_schedule_node and
async_schedule_node_domain, which map to async_schedule and
async_schedule_domain but provide NUMA node specific functionality. They
replace the original functions, which have been moved to inline
definitions that call the new functions while passing NUMA_NO_NODE.

The other two functions are async_schedule_dev and
async_schedule_dev_domain, which provide NUMA specific functionality when
the data member passed in is a device and that device has a NUMA node
other than NUMA_NO_NODE.

The main motivation behind this is to address the need to be able to
schedule device specific init work on specific NUMA nodes in order to
improve performance of memory initialization.

Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 include/linux/async.h |   84 +++++++++++++++++++++++++++++++++++++++++++++++--
 kernel/async.c        |   53 +++++++++++++++++--------------
 2 files changed, 110 insertions(+), 27 deletions(-)

diff --git a/include/linux/async.h b/include/linux/async.h
index 6b0226bdaadc..98a94e6e367d 100644
--- a/include/linux/async.h
+++ b/include/linux/async.h
@@ -14,6 +14,8 @@
 
 #include <linux/types.h>
 #include <linux/list.h>
+#include <linux/numa.h>
+#include <linux/device.h>
 
 typedef u64 async_cookie_t;
 typedef void (*async_func_t) (void *data, async_cookie_t cookie);
@@ -37,9 +39,85 @@ struct async_domain {
 	struct async_domain _name = { .pending = LIST_HEAD_INIT(_name.pending), \
 				      .registered = 0 }
 
-extern async_cookie_t async_schedule(async_func_t func, void *data);
-extern async_cookie_t async_schedule_domain(async_func_t func, void *data,
-					    struct async_domain *domain);
+async_cookie_t async_schedule_node(async_func_t func, void *data,
+				   int node);
+async_cookie_t async_schedule_node_domain(async_func_t func, void *data,
+					  int node,
+					  struct async_domain *domain);
+
+/**
+ * async_schedule - schedule a function for asynchronous execution
+ * @func: function to execute asynchronously
+ * @data: data pointer to pass to the function
+ *
+ * Returns an async_cookie_t that may be used for checkpointing later.
+ * Note: This function may be called from atomic or non-atomic contexts.
+ */
+static inline async_cookie_t async_schedule(async_func_t func, void *data)
+{
+	return async_schedule_node(func, data, NUMA_NO_NODE);
+}
+
+/**
+ * async_schedule_domain - schedule a function for asynchronous execution within a certain domain
+ * @func: function to execute asynchronously
+ * @data: data pointer to pass to the function
+ * @domain: the domain
+ *
+ * Returns an async_cookie_t that may be used for checkpointing later.
+ * @domain may be used in the async_synchronize_*_domain() functions to
+ * wait within a certain synchronization domain rather than globally.  A
+ * synchronization domain is specified via @domain.
+ * Note: This function may be called from atomic or non-atomic contexts.
+ */
+static inline async_cookie_t
+async_schedule_domain(async_func_t func, void *data,
+		      struct async_domain *domain)
+{
+	return async_schedule_node_domain(func, data, NUMA_NO_NODE, domain);
+}
+
+/**
+ * async_schedule_dev - A device specific version of async_schedule
+ * @func: function to execute asynchronously
+ * @dev: device argument to be passed to function
+ *
+ * Returns an async_cookie_t that may be used for checkpointing later.
+ * @dev is used as both the argument for the function and to provide NUMA
+ * context for where to run the function. By doing this we can try to
+ * provide for the best possible outcome by operating on the device on the
+ * CPUs closest to the device.
+ * Note: This function may be called from atomic or non-atomic contexts.
+ */
+static inline async_cookie_t
+async_schedule_dev(async_func_t func, struct device *dev)
+{
+	return async_schedule_node(func, dev, dev_to_node(dev));
+}
+
+/**
+ * async_schedule_dev_domain - A device specific version of async_schedule_domain
+ * @func: function to execute asynchronously
+ * @dev: device argument to be passed to function
+ * @domain: the domain
+ *
+ * Returns an async_cookie_t that may be used for checkpointing later.
+ * @dev is used as both the argument for the function and to provide NUMA
+ * context for where to run the function. By doing this we can try to
+ * provide for the best possible outcome by operating on the device on the
+ * CPUs closest to the device.
+ * @domain may be used in the async_synchronize_*_domain() functions to
+ * wait within a certain synchronization domain rather than globally.  A
+ * synchronization domain is specified via @domain.
+ * Note: This function may be called from atomic or non-atomic contexts.
+ */
+static inline async_cookie_t
+async_schedule_dev_domain(async_func_t func, struct device *dev,
+			  struct async_domain *domain)
+{
+	return async_schedule_node_domain(func, dev, dev_to_node(dev), domain);
+}
+
 void async_unregister_domain(struct async_domain *domain);
 extern void async_synchronize_full(void);
 extern void async_synchronize_full_domain(struct async_domain *domain);
diff --git a/kernel/async.c b/kernel/async.c
index a893d6170944..23cf67b4b4f8 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -149,7 +149,25 @@ static void async_run_entry_fn(struct work_struct *work)
 	wake_up(&async_done);
 }
 
-static async_cookie_t __async_schedule(async_func_t func, void *data, struct async_domain *domain)
+/**
+ * async_schedule_node_domain - NUMA specific version of async_schedule_domain
+ * @func: function to execute asynchronously
+ * @data: data pointer to pass to the function
+ * @node: NUMA node that we want to schedule this on or close to
+ * @domain: the domain
+ *
+ * Returns an async_cookie_t that may be used for checkpointing later.
+ * @domain may be used in the async_synchronize_*_domain() functions to
+ * wait within a certain synchronization domain rather than globally.  A
+ * synchronization domain is specified via @domain.  Note: This function
+ * may be called from atomic or non-atomic contexts.
+ *
+ * The node requested will be honored on a best effort basis. If the node
+ * has no CPUs associated with it then the work is distributed among all
+ * available CPUs.
+ */
+async_cookie_t async_schedule_node_domain(async_func_t func, void *data,
+					  int node, struct async_domain *domain)
 {
 	struct async_entry *entry;
 	unsigned long flags;
@@ -195,43 +213,30 @@ static async_cookie_t __async_schedule(async_func_t func, void *data, struct asy
 	current->flags |= PF_USED_ASYNC;
 
 	/* schedule for execution */
-	queue_work(system_unbound_wq, &entry->work);
+	queue_work_node(node, system_unbound_wq, &entry->work);
 
 	return newcookie;
 }
+EXPORT_SYMBOL_GPL(async_schedule_node_domain);
 
 /**
- * async_schedule - schedule a function for asynchronous execution
+ * async_schedule_node - NUMA specific version of async_schedule
  * @func: function to execute asynchronously
  * @data: data pointer to pass to the function
+ * @node: NUMA node that we want to schedule this on or close to
  *
  * Returns an async_cookie_t that may be used for checkpointing later.
  * Note: This function may be called from atomic or non-atomic contexts.
- */
-async_cookie_t async_schedule(async_func_t func, void *data)
-{
-	return __async_schedule(func, data, &async_dfl_domain);
-}
-EXPORT_SYMBOL_GPL(async_schedule);
-
-/**
- * async_schedule_domain - schedule a function for asynchronous execution within a certain domain
- * @func: function to execute asynchronously
- * @data: data pointer to pass to the function
- * @domain: the domain
  *
- * Returns an async_cookie_t that may be used for checkpointing later.
- * @domain may be used in the async_synchronize_*_domain() functions to
- * wait within a certain synchronization domain rather than globally.  A
- * synchronization domain is specified via @domain.  Note: This function
- * may be called from atomic or non-atomic contexts.
+ * The node requested will be honored on a best effort basis. If the node
+ * has no CPUs associated with it then the work is distributed among all
+ * available CPUs.
  */
-async_cookie_t async_schedule_domain(async_func_t func, void *data,
-				     struct async_domain *domain)
+async_cookie_t async_schedule_node(async_func_t func, void *data, int node)
 {
-	return __async_schedule(func, data, domain);
+	return async_schedule_node_domain(func, data, node, &async_dfl_domain);
 }
-EXPORT_SYMBOL_GPL(async_schedule_domain);
+EXPORT_SYMBOL_GPL(async_schedule_node);
 
 /**
  * async_synchronize_full - synchronize all asynchronous function calls



* [driver-core PATCH v4 3/6] device core: Consolidate locking and unlocking of parent and device
  2018-10-15 15:09 [driver-core PATCH v4 0/6] Add NUMA aware async_schedule calls Alexander Duyck
  2018-10-15 15:09 ` [driver-core PATCH v4 1/6] workqueue: Provide queue_work_node to queue work near a given NUMA node Alexander Duyck
  2018-10-15 15:09 ` [driver-core PATCH v4 2/6] async: Add support for queueing on specific " Alexander Duyck
@ 2018-10-15 15:09 ` Alexander Duyck
  2018-10-18  7:46   ` Rafael J. Wysocki
  2018-10-18 17:53   ` Bart Van Assche
  2018-10-15 15:09 ` [driver-core PATCH v4 4/6] driver core: Probe devices asynchronously instead of the driver Alexander Duyck
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 15+ messages in thread
From: Alexander Duyck @ 2018-10-15 15:09 UTC (permalink / raw)
  To: gregkh, linux-kernel
  Cc: len.brown, rafael, linux-pm, jiangshanlai, pavel, zwisler, tj,
	akpm, alexander.h.duyck

This patch is meant to try and consolidate all of the locking and unlocking
of both the parent and device when attaching or removing a driver from a
given device.

To do that I first consolidated the lock pattern into two functions
__device_driver_lock and __device_driver_unlock. After doing that I then
created functions specific to attaching and detaching the driver while
acquiring this locks. By doing this I was able to reduce the number of
spots where we touch need_parent_lock from 12 down to 4.

Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 drivers/base/base.h |    2 +
 drivers/base/bus.c  |   23 ++------------
 drivers/base/dd.c   |   83 ++++++++++++++++++++++++++++++++++++++++++---------
 3 files changed, 75 insertions(+), 33 deletions(-)

diff --git a/drivers/base/base.h b/drivers/base/base.h
index 7a419a7a6235..3f22ebd6117a 100644
--- a/drivers/base/base.h
+++ b/drivers/base/base.h
@@ -124,6 +124,8 @@ extern int driver_add_groups(struct device_driver *drv,
 			     const struct attribute_group **groups);
 extern void driver_remove_groups(struct device_driver *drv,
 				 const struct attribute_group **groups);
+int device_driver_attach(struct device_driver *drv, struct device *dev);
+void device_driver_detach(struct device *dev);
 
 extern char *make_class_name(const char *name, struct kobject *kobj);
 
diff --git a/drivers/base/bus.c b/drivers/base/bus.c
index 8bfd27ec73d6..8a630f9bd880 100644
--- a/drivers/base/bus.c
+++ b/drivers/base/bus.c
@@ -184,11 +184,7 @@ static ssize_t unbind_store(struct device_driver *drv, const char *buf,
 
 	dev = bus_find_device_by_name(bus, NULL, buf);
 	if (dev && dev->driver == drv) {
-		if (dev->parent && dev->bus->need_parent_lock)
-			device_lock(dev->parent);
-		device_release_driver(dev);
-		if (dev->parent && dev->bus->need_parent_lock)
-			device_unlock(dev->parent);
+		device_driver_detach(dev);
 		err = count;
 	}
 	put_device(dev);
@@ -211,13 +207,7 @@ static ssize_t bind_store(struct device_driver *drv, const char *buf,
 
 	dev = bus_find_device_by_name(bus, NULL, buf);
 	if (dev && dev->driver == NULL && driver_match_device(drv, dev)) {
-		if (dev->parent && bus->need_parent_lock)
-			device_lock(dev->parent);
-		device_lock(dev);
-		err = driver_probe_device(drv, dev);
-		device_unlock(dev);
-		if (dev->parent && bus->need_parent_lock)
-			device_unlock(dev->parent);
+		err = device_driver_attach(drv, dev);
 
 		if (err > 0) {
 			/* success */
@@ -769,13 +759,8 @@ int bus_rescan_devices(struct bus_type *bus)
  */
 int device_reprobe(struct device *dev)
 {
-	if (dev->driver) {
-		if (dev->parent && dev->bus->need_parent_lock)
-			device_lock(dev->parent);
-		device_release_driver(dev);
-		if (dev->parent && dev->bus->need_parent_lock)
-			device_unlock(dev->parent);
-	}
+	if (dev->driver)
+		device_driver_detach(dev);
 	return bus_rescan_devices_helper(dev, NULL);
 }
 EXPORT_SYMBOL_GPL(device_reprobe);
diff --git a/drivers/base/dd.c b/drivers/base/dd.c
index 169412ee4ae8..e845cd2a87af 100644
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -864,6 +864,60 @@ void device_initial_probe(struct device *dev)
 	__device_attach(dev, true);
 }
 
+/*
+ * __device_driver_lock - acquire locks needed to manipulate dev->drv
+ * @dev: Device we will update driver info for
+ * @parent: Parent device needed if the bus requires parent lock
+ *
+ * This function will take the required locks for manipulating dev->drv.
+ * Normally this will just be the @dev lock, but when called for a USB
+ * interface, @parent lock will be held as well.
+ */
+static void __device_driver_lock(struct device *dev, struct device *parent)
+{
+	if (parent && dev->bus->need_parent_lock)
+		device_lock(parent);
+	device_lock(dev);
+}
+
+/*
+ * __device_driver_unlock - release locks needed to manipulate dev->drv
+ * @dev: Device we will update driver info for
+ * @parent: Parent device needed if the bus requires parent lock
+ *
+ * This function will release the required locks for manipulating dev->drv.
+ * Normally this will just be the @dev lock, but when called for a
+ * USB interface, @parent lock will be released as well.
+ */
+static void __device_driver_unlock(struct device *dev, struct device *parent)
+{
+	device_unlock(dev);
+	if (parent && dev->bus->need_parent_lock)
+		device_unlock(parent);
+}
+
+/**
+ * device_driver_attach - attach a specific driver to a specific device
+ * @drv: Driver to attach
+ * @dev: Device to attach it to
+ *
+ * Manually attach driver to a device. Will acquire both @dev lock and
+ * @dev->parent lock if needed.
+ */
+int device_driver_attach(struct device_driver *drv, struct device *dev)
+{
+	int ret = 0;
+
+	__device_driver_lock(dev, dev->parent);
+
+	if (!dev->driver)
+		ret = driver_probe_device(drv, dev);
+
+	__device_driver_unlock(dev, dev->parent);
+
+	return ret;
+}
+
 static int __driver_attach(struct device *dev, void *data)
 {
 	struct device_driver *drv = data;
@@ -891,14 +945,7 @@ static int __driver_attach(struct device *dev, void *data)
 		return ret;
 	} /* ret > 0 means positive match */
 
-	if (dev->parent && dev->bus->need_parent_lock)
-		device_lock(dev->parent);
-	device_lock(dev);
-	if (!dev->driver)
-		driver_probe_device(drv, dev);
-	device_unlock(dev);
-	if (dev->parent && dev->bus->need_parent_lock)
-		device_unlock(dev->parent);
+	device_driver_attach(drv, dev);
 
 	return 0;
 }
@@ -993,16 +1040,12 @@ void device_release_driver_internal(struct device *dev,
 				    struct device_driver *drv,
 				    struct device *parent)
 {
-	if (parent && dev->bus->need_parent_lock)
-		device_lock(parent);
+	__device_driver_lock(dev, parent);
 
-	device_lock(dev);
 	if (!drv || drv == dev->driver)
 		__device_release_driver(dev, parent);
 
-	device_unlock(dev);
-	if (parent && dev->bus->need_parent_lock)
-		device_unlock(parent);
+	__device_driver_unlock(dev, parent);
 }
 
 /**
@@ -1028,6 +1071,18 @@ void device_release_driver(struct device *dev)
 EXPORT_SYMBOL_GPL(device_release_driver);
 
 /**
+ * device_driver_detach - detach driver from a specific device
+ * @dev: device to detach driver from
+ *
+ * Manually detach driver from device. Will acquire both @dev lock and
+ * @dev->parent lock if needed.
+ */
+void device_driver_detach(struct device *dev)
+{
+	device_release_driver_internal(dev, NULL, dev->parent);
+}
+
+/**
  * driver_detach - detach driver from all devices it controls.
  * @drv: driver.
  */
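
For reference, the lock ordering that the two helpers encode can be
sketched as follows (illustration only, not part of the patch):

	/* parent lock first (only if dev->bus->need_parent_lock), then dev */
	__device_driver_lock(dev, dev->parent);

	/* ... manipulate dev->driver under both locks ... */

	/* release in reverse order: dev lock first, then the parent lock */
	__device_driver_unlock(dev, dev->parent);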



* [driver-core PATCH v4 4/6] driver core: Probe devices asynchronously instead of the driver
  2018-10-15 15:09 [driver-core PATCH v4 0/6] Add NUMA aware async_schedule calls Alexander Duyck
                   ` (2 preceding siblings ...)
  2018-10-15 15:09 ` [driver-core PATCH v4 3/6] device core: Consolidate locking and unlocking of parent and device Alexander Duyck
@ 2018-10-15 15:09 ` Alexander Duyck
  2018-10-18 18:11   ` Bart Van Assche
  2018-10-15 15:09 ` [driver-core PATCH v4 5/6] driver core: Attach devices on CPU local to device node Alexander Duyck
  2018-10-15 15:09 ` [driver-core PATCH v4 6/6] PM core: Use new async_schedule_dev command Alexander Duyck
  5 siblings, 1 reply; 15+ messages in thread
From: Alexander Duyck @ 2018-10-15 15:09 UTC (permalink / raw)
  To: gregkh, linux-kernel
  Cc: len.brown, rafael, linux-pm, jiangshanlai, pavel, zwisler, tj,
	akpm, alexander.h.duyck

This change makes it so that we probe devices asynchronously instead of
probing the driver asynchronously. This results in us seeing the same
behavior whether the device is registered before the driver or after.
This way we can avoid serializing the initialization should the driver
not be loaded until after the devices have already been added.

The motivation behind this is that if we have a set of devices that take
a significant amount of time to load, we can greatly reduce the total
load time by processing them in parallel instead of one at a time. In
addition, each device can exist on a different node, so placing a single
thread on one CPU to initialize all of the devices for a given driver
can result in poor performance on a system with multiple nodes.

I am using the driver_data member of the device struct to store the driver
pointer while we wait on the deferred probe call. This should be safe to do
as the value will either be set to NULL on a failed probe or driver load
followed by unload, or the driver value itself will be set on a successful
driver load.

Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 drivers/base/bus.c     |   23 +++--------------------
 drivers/base/dd.c      |   45 +++++++++++++++++++++++++++++++++++++++++++++
 include/linux/device.h |    4 +++-
 3 files changed, 51 insertions(+), 21 deletions(-)

diff --git a/drivers/base/bus.c b/drivers/base/bus.c
index 8a630f9bd880..0cd2eadd0816 100644
--- a/drivers/base/bus.c
+++ b/drivers/base/bus.c
@@ -606,17 +606,6 @@ static ssize_t uevent_store(struct device_driver *drv, const char *buf,
 }
 static DRIVER_ATTR_WO(uevent);
 
-static void driver_attach_async(void *_drv, async_cookie_t cookie)
-{
-	struct device_driver *drv = _drv;
-	int ret;
-
-	ret = driver_attach(drv);
-
-	pr_debug("bus: '%s': driver %s async attach completed: %d\n",
-		 drv->bus->name, drv->name, ret);
-}
-
 /**
  * bus_add_driver - Add a driver to the bus.
  * @drv: driver.
@@ -649,15 +638,9 @@ int bus_add_driver(struct device_driver *drv)
 
 	klist_add_tail(&priv->knode_bus, &bus->p->klist_drivers);
 	if (drv->bus->p->drivers_autoprobe) {
-		if (driver_allows_async_probing(drv)) {
-			pr_debug("bus: '%s': probing driver %s asynchronously\n",
-				drv->bus->name, drv->name);
-			async_schedule(driver_attach_async, drv);
-		} else {
-			error = driver_attach(drv);
-			if (error)
-				goto out_unregister;
-		}
+		error = driver_attach(drv);
+		if (error)
+			goto out_unregister;
 	}
 	module_add_driver(drv->owner, drv);
 
diff --git a/drivers/base/dd.c b/drivers/base/dd.c
index e845cd2a87af..c33f893ec9d8 100644
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -802,6 +802,7 @@ static int __device_attach(struct device *dev, bool allow_async)
 			ret = 1;
 		else {
 			dev->driver = NULL;
+			dev_set_drvdata(dev, NULL);
 			ret = 0;
 		}
 	} else {
@@ -918,6 +919,31 @@ int device_driver_attach(struct device_driver *drv, struct device *dev)
 	return ret;
 }
 
+static void __driver_attach_async_helper(void *_dev, async_cookie_t cookie)
+{
+	struct device *dev = _dev;
+
+	__device_driver_lock(dev, dev->parent);
+
+	/*
+	 * If someone attempted to bind a driver either successfully or
+	 * unsuccessfully before we got here we should just skip the driver
+	 * probe call.
+	 */
+	if (!dev->driver) {
+		struct device_driver *drv = dev_get_drvdata(dev);
+
+		if (drv)
+			driver_probe_device(drv, dev);
+	}
+
+	__device_driver_unlock(dev, dev->parent);
+
+	dev_dbg(dev, "async probe completed\n");
+
+	put_device(dev);
+}
+
 static int __driver_attach(struct device *dev, void *data)
 {
 	struct device_driver *drv = data;
@@ -945,6 +971,25 @@ static int __driver_attach(struct device *dev, void *data)
 		return ret;
 	} /* ret > 0 means positive match */
 
+	if (driver_allows_async_probing(drv)) {
+		/*
+		 * Instead of probing the device synchronously we will
+		 * probe it asynchronously to allow for more parallelism.
+		 *
+		 * We only take the device lock here in order to guarantee
+		 * that the dev->driver and driver_data fields are protected
+		 */
+		dev_dbg(dev, "scheduling asynchronous probe\n");
+		device_lock(dev);
+		if (!dev->driver) {
+			get_device(dev);
+			dev_set_drvdata(dev, drv);
+			async_schedule(__driver_attach_async_helper, dev);
+		}
+		device_unlock(dev);
+		return 0;
+	}
+
 	device_driver_attach(drv, dev);
 
 	return 0;
diff --git a/include/linux/device.h b/include/linux/device.h
index 90224e75ade4..b0abb04c29dc 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -906,7 +906,9 @@ struct dev_links_info {
  * 		variants, which GPIO pins act in what additional roles, and so
  * 		on.  This shrinks the "Board Support Packages" (BSPs) and
  * 		minimizes board-specific #ifdefs in drivers.
- * @driver_data: Private pointer for driver specific info.
+ * @driver_data: Private pointer for driver specific info if driver is
+ *		non-NULL. Pointer to deferred driver to be attached if driver
+ *		is NULL.
  * @links:	Links to suppliers and consumers of this device.
  * @power:	For device power management.
  *		See Documentation/driver-api/pm/devices.rst for details.



* [driver-core PATCH v4 5/6] driver core: Attach devices on CPU local to device node
  2018-10-15 15:09 [driver-core PATCH v4 0/6] Add NUMA aware async_schedule calls Alexander Duyck
                   ` (3 preceding siblings ...)
  2018-10-15 15:09 ` [driver-core PATCH v4 4/6] driver core: Probe devices asynchronously instead of the driver Alexander Duyck
@ 2018-10-15 15:09 ` Alexander Duyck
  2018-10-15 15:09 ` [driver-core PATCH v4 6/6] PM core: Use new async_schedule_dev command Alexander Duyck
  5 siblings, 0 replies; 15+ messages in thread
From: Alexander Duyck @ 2018-10-15 15:09 UTC (permalink / raw)
  To: gregkh, linux-kernel
  Cc: len.brown, rafael, linux-pm, jiangshanlai, pavel, zwisler, tj,
	akpm, alexander.h.duyck

This change makes it so that we call the asynchronous probe routines on a
CPU local to the device node. By doing this we should be able to improve
our initialization time significantly as we can avoid having to access the
device from a remote node which may introduce higher latency.

For example, in the case of initializing memory for an NVDIMM this can
have a significant impact, as initializing 3TB on a remote node can take
up to 39 seconds, while initializing it on a local node only takes 23
seconds. It is situations like this where we will see the biggest
improvement.

Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 drivers/base/dd.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/base/dd.c b/drivers/base/dd.c
index c33f893ec9d8..65cfdd2b00ed 100644
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -827,7 +827,7 @@ static int __device_attach(struct device *dev, bool allow_async)
 			 */
 			dev_dbg(dev, "scheduling asynchronous probe\n");
 			get_device(dev);
-			async_schedule(__device_attach_async_helper, dev);
+			async_schedule_dev(__device_attach_async_helper, dev);
 		} else {
 			pm_request_idle(dev);
 		}
@@ -984,7 +984,7 @@ static int __driver_attach(struct device *dev, void *data)
 		if (!dev->driver) {
 			get_device(dev);
 			dev_set_drvdata(dev, drv);
-			async_schedule(__driver_attach_async_helper, dev);
+			async_schedule_dev(__driver_attach_async_helper, dev);
 		}
 		device_unlock(dev);
 		return 0;



* [driver-core PATCH v4 6/6] PM core: Use new async_schedule_dev command
  2018-10-15 15:09 [driver-core PATCH v4 0/6] Add NUMA aware async_schedule calls Alexander Duyck
                   ` (4 preceding siblings ...)
  2018-10-15 15:09 ` [driver-core PATCH v4 5/6] driver core: Attach devices on CPU local to device node Alexander Duyck
@ 2018-10-15 15:09 ` Alexander Duyck
  5 siblings, 0 replies; 15+ messages in thread
From: Alexander Duyck @ 2018-10-15 15:09 UTC (permalink / raw)
  To: gregkh, linux-kernel
  Cc: len.brown, rafael, linux-pm, jiangshanlai, pavel, zwisler, tj,
	akpm, alexander.h.duyck

This change makes it so that we use the device specific version of the
async_schedule commands to defer various tasks related to power management.
By doing this we should see a slight improvement in performance, as any
device that is sensitive to latency/locality in its setup will now be
initialized on the node closest to the device.

Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 drivers/base/power/main.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index a690fd400260..ebb8b61b52e9 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -726,7 +726,7 @@ void dpm_noirq_resume_devices(pm_message_t state)
 		reinit_completion(&dev->power.completion);
 		if (is_async(dev)) {
 			get_device(dev);
-			async_schedule(async_resume_noirq, dev);
+			async_schedule_dev(async_resume_noirq, dev);
 		}
 	}
 
@@ -883,7 +883,7 @@ void dpm_resume_early(pm_message_t state)
 		reinit_completion(&dev->power.completion);
 		if (is_async(dev)) {
 			get_device(dev);
-			async_schedule(async_resume_early, dev);
+			async_schedule_dev(async_resume_early, dev);
 		}
 	}
 
@@ -1047,7 +1047,7 @@ void dpm_resume(pm_message_t state)
 		reinit_completion(&dev->power.completion);
 		if (is_async(dev)) {
 			get_device(dev);
-			async_schedule(async_resume, dev);
+			async_schedule_dev(async_resume, dev);
 		}
 	}
 
@@ -1366,7 +1366,7 @@ static int device_suspend_noirq(struct device *dev)
 
 	if (is_async(dev)) {
 		get_device(dev);
-		async_schedule(async_suspend_noirq, dev);
+		async_schedule_dev(async_suspend_noirq, dev);
 		return 0;
 	}
 	return __device_suspend_noirq(dev, pm_transition, false);
@@ -1569,7 +1569,7 @@ static int device_suspend_late(struct device *dev)
 
 	if (is_async(dev)) {
 		get_device(dev);
-		async_schedule(async_suspend_late, dev);
+		async_schedule_dev(async_suspend_late, dev);
 		return 0;
 	}
 
@@ -1833,7 +1833,7 @@ static int device_suspend(struct device *dev)
 
 	if (is_async(dev)) {
 		get_device(dev);
-		async_schedule(async_suspend, dev);
+		async_schedule_dev(async_suspend, dev);
 		return 0;
 	}
 



* Re: [driver-core PATCH v4 3/6] device core: Consolidate locking and unlocking of parent and device
  2018-10-15 15:09 ` [driver-core PATCH v4 3/6] device core: Consolidate locking and unlocking of parent and device Alexander Duyck
@ 2018-10-18  7:46   ` Rafael J. Wysocki
  2018-10-18 17:53   ` Bart Van Assche
  1 sibling, 0 replies; 15+ messages in thread
From: Rafael J. Wysocki @ 2018-10-18  7:46 UTC (permalink / raw)
  To: alexander.h.duyck
  Cc: Greg Kroah-Hartman, Linux Kernel Mailing List, Len Brown,
	Rafael J. Wysocki, Linux PM, Lai Jiangshan, Pavel Machek,
	zwisler, Tejun Heo, Andrew Morton

On Mon, Oct 15, 2018 at 5:09 PM Alexander Duyck
<alexander.h.duyck@linux.intel.com> wrote:
>
> This patch is meant to try and consolidate all of the locking and unlocking
> of both the parent and device when attaching or removing a driver from a
> given device.
>
> To do that I first consolidated the lock pattern into two functions
> __device_driver_lock and __device_driver_unlock. After doing that I then
> created functions specific to attaching and detaching the driver while
> acquiring this locks. By doing this I was able to reduce the number of
> spots where we touch need_parent_lock from 12 down to 4.
>
> Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>

Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>


* Re: [driver-core PATCH v4 3/6] device core: Consolidate locking and unlocking of parent and device
  2018-10-15 15:09 ` [driver-core PATCH v4 3/6] device core: Consolidate locking and unlocking of parent and device Alexander Duyck
  2018-10-18  7:46   ` Rafael J. Wysocki
@ 2018-10-18 17:53   ` Bart Van Assche
  1 sibling, 0 replies; 15+ messages in thread
From: Bart Van Assche @ 2018-10-18 17:53 UTC (permalink / raw)
  To: Alexander Duyck, gregkh, linux-kernel
  Cc: len.brown, rafael, linux-pm, jiangshanlai, pavel, zwisler, tj, akpm

On Mon, 2018-10-15 at 08:09 -0700, Alexander Duyck wrote:
> This patch is meant to try and consolidate all of the locking and unlocking
> of both the parent and device when attaching or removing a driver from a
> given device.
> 
> To do that I first consolidated the lock pattern into two functions
> __device_driver_lock and __device_driver_unlock. After doing that I then
> created functions specific to attaching and detaching the driver while
> acquiring this locks. By doing this I was able to reduce the number of
            ^^^^
Should "this" perhaps be changed into "these"?
 
> +/*
> + * __device_driver_lock - acquire locks needed to manipulate dev->drv
> + * @dev: Device we will update driver info for
> + * @parent: Parent device needed if the bus requires parent lock

Please consider splitting that description into two sentences to enhance
clarity, e.g. "Parent device. Needed if the bus requires parent lock."

> + * @parent: Parent device needed if the bus requires parent lock

Same comment here.

>  /**
> + * device_driver_detach - detach driver from a specific device
> + * @dev: device to detach driver from
> + *
> + * Manually detach driver from device. Will acquire both @dev lock and
> + * @dev->parent lock if needed.
> + */
> +void device_driver_detach(struct device *dev)
> +{
> +	device_release_driver_internal(dev, NULL, dev->parent);
> +}
> +
> +/**
>   * driver_detach - detach driver from all devices it controls.
>   * @drv: driver.
>   */

Elsewhere in the driver core the word "manually" only appears in comments
above exported functions and functions that implement sysfs store methods.
Since device_driver_detach() is neither, I think the word "manually" should
not be used in the comment block above that function. But since the rest of
this patch looks fine to me, feel free to add:

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


* Re: [driver-core PATCH v4 4/6] driver core: Probe devices asynchronously instead of the driver
  2018-10-15 15:09 ` [driver-core PATCH v4 4/6] driver core: Probe devices asynchronously instead of the driver Alexander Duyck
@ 2018-10-18 18:11   ` Bart Van Assche
  2018-10-18 19:38     ` Alexander Duyck
  0 siblings, 1 reply; 15+ messages in thread
From: Bart Van Assche @ 2018-10-18 18:11 UTC (permalink / raw)
  To: Alexander Duyck, gregkh, linux-kernel
  Cc: len.brown, rafael, linux-pm, jiangshanlai, pavel, zwisler, tj, akpm

On Mon, 2018-10-15 at 08:09 -0700, Alexander Duyck wrote:
> +static void __driver_attach_async_helper(void *_dev, async_cookie_t cookie)
> +{
> +	struct device *dev = _dev;
> +
> +	__device_driver_lock(dev, dev->parent);
> +
> +	/*
> +	 * If someone attempted to bind a driver either successfully or
> +	 * unsuccessfully before we got here we should just skip the driver
> +	 * probe call.
> +	 */
> +	if (!dev->driver) {
> +		struct device_driver *drv = dev_get_drvdata(dev);
> +
> +		if (drv)
> +			driver_probe_device(drv, dev);
> +	}
> +
> +	__device_driver_unlock(dev, dev->parent);
> +
> +	dev_dbg(dev, "async probe completed\n");
> +
> +	put_device(dev);
> +}
> +
>  static int __driver_attach(struct device *dev, void *data)
>  {
>  	struct device_driver *drv = data;
> @@ -945,6 +971,25 @@ static int __driver_attach(struct device *dev, void *data)
>  		return ret;
>  	} /* ret > 0 means positive match */
>  
> +	if (driver_allows_async_probing(drv)) {
> +		/*
> +		 * Instead of probing the device synchronously we will
> +		 * probe it asynchronously to allow for more parallelism.
> +		 *
> +		 * We only take the device lock here in order to guarantee
> +		 * that the dev->driver and driver_data fields are protected
> +		 */
> +		dev_dbg(dev, "scheduling asynchronous probe\n");
> +		device_lock(dev);
> +		if (!dev->driver) {
> +			get_device(dev);
> +			dev_set_drvdata(dev, drv);
> +			async_schedule(__driver_attach_async_helper, dev);
> +		}
> +		device_unlock(dev);
> +		return 0;
> +	}
> +
>  	device_driver_attach(drv, dev);

What prevents the driver pointer from becoming invalid after async_schedule() has
been called and before __driver_attach_async_helper() is called? I think we need
protection against concurrent driver_unregister() and __driver_attach_async_helper()
calls. I'm not sure whether that is possible without introducing a new mutex.

Thanks,

Bart.



* Re: [driver-core PATCH v4 4/6] driver core: Probe devices asynchronously instead of the driver
  2018-10-18 18:11   ` Bart Van Assche
@ 2018-10-18 19:38     ` Alexander Duyck
  2018-10-18 20:13       ` Bart Van Assche
  0 siblings, 1 reply; 15+ messages in thread
From: Alexander Duyck @ 2018-10-18 19:38 UTC (permalink / raw)
  To: Bart Van Assche, gregkh, linux-kernel
  Cc: len.brown, rafael, linux-pm, jiangshanlai, pavel, zwisler, tj, akpm

On 10/18/2018 11:11 AM, Bart Van Assche wrote:
> On Mon, 2018-10-15 at 08:09 -0700, Alexander Duyck wrote:
>> +static void __driver_attach_async_helper(void *_dev, async_cookie_t cookie)
>> +{
>> +	struct device *dev = _dev;
>> +
>> +	__device_driver_lock(dev, dev->parent);
>> +
>> +	/*
>> +	 * If someone attempted to bind a driver either successfully or
>> +	 * unsuccessfully before we got here we should just skip the driver
>> +	 * probe call.
>> +	 */

The answer to your question below is up here.

>> +	if (!dev->driver) {
>> +		struct device_driver *drv = dev_get_drvdata(dev);
>> +
>> +		if (drv)
>> +			driver_probe_device(drv, dev);
>> +	}
>> +
>> +	__device_driver_unlock(dev, dev->parent);
>> +
>> +	dev_dbg(dev, "async probe completed\n");
>> +
>> +	put_device(dev);
>> +}
>> +
>>   static int __driver_attach(struct device *dev, void *data)
>>   {
>>   	struct device_driver *drv = data;
>> @@ -945,6 +971,25 @@ static int __driver_attach(struct device *dev, void *data)
>>   		return ret;
>>   	} /* ret > 0 means positive match */
>>   
>> +	if (driver_allows_async_probing(drv)) {
>> +		/*
>> +		 * Instead of probing the device synchronously we will
>> +		 * probe it asynchronously to allow for more parallelism.
>> +		 *
>> +		 * We only take the device lock here in order to guarantee
>> +		 * that the dev->driver and driver_data fields are protected
>> +		 */
>> +		dev_dbg(dev, "scheduling asynchronous probe\n");
>> +		device_lock(dev);
>> +		if (!dev->driver) {
>> +			get_device(dev);
>> +			dev_set_drvdata(dev, drv);
>> +			async_schedule(__driver_attach_async_helper, dev);
>> +		}
>> +		device_unlock(dev);
>> +		return 0;
>> +	}
>> +
>>   	device_driver_attach(drv, dev);
> 
> What prevents the driver pointer from becoming invalid after async_schedule() has
> been called and before __driver_attach_async_helper() is called? I think we need
> protection against concurrent driver_unregister() and __driver_attach_async_helper()
> calls. I'm not sure whether that is possible without introducing a new mutex.
> 
> Thanks,
> 
> Bart.

See the spot called out above.

Basically if somebody loads a driver the dev->driver becomes set. If a 
driver is removed it will clear dev->driver and set driver_data to 
0/NULL. That is what I am using as a mutex to track it in conjunction 
with the device mutex. Basically if somebody attempts to attach a driver 
before we get there we just exit and don't attempt to load this driver.



* Re: [driver-core PATCH v4 4/6] driver core: Probe devices asynchronously instead of the driver
  2018-10-18 19:38     ` Alexander Duyck
@ 2018-10-18 20:13       ` Bart Van Assche
  2018-10-19  2:20         ` Alexander Duyck
  0 siblings, 1 reply; 15+ messages in thread
From: Bart Van Assche @ 2018-10-18 20:13 UTC (permalink / raw)
  To: Alexander Duyck, gregkh, linux-kernel
  Cc: len.brown, rafael, linux-pm, jiangshanlai, pavel, zwisler, tj, akpm

On Thu, 2018-10-18 at 12:38 -0700, Alexander Duyck wrote:
> Basically if somebody loads a driver the dev->driver becomes set. If a 
> driver is removed it will clear dev->driver and set driver_data to 
> 0/NULL. That is what I am using as a mutex to track it in conjunction 
> with the device mutex. Basically if somebody attempts to attach a driver 
> before we get there we just exit and don't attempt to load this driver.

I don't think that the above matches your code. __device_attach() does not
set the dev->driver pointer before scheduling an asynchronous probe. Only
dev->driver_data gets set before the asynchronous probe is scheduled. Since
driver_detach() only iterates over devices that are in the per-driver klist,
it will skip all devices for which an asynchronous probe has been scheduled
but __device_attach_async_helper() has not yet been called. My conclusion
remains that this patch does not prevent a driver pointer from becoming
invalid concurrently with __device_attach_async_helper() dereferencing the
same driver pointer.

Bart.


* Re: [driver-core PATCH v4 4/6] driver core: Probe devices asynchronously instead of the driver
  2018-10-18 20:13       ` Bart Van Assche
@ 2018-10-19  2:20         ` Alexander Duyck
  2018-10-19  2:31           ` Bart Van Assche
  0 siblings, 1 reply; 15+ messages in thread
From: Alexander Duyck @ 2018-10-19  2:20 UTC (permalink / raw)
  To: bvanassche
  Cc: alexander.h.duyck, Greg KH, LKML, len.brown, rafael, linux-pm,
	jiangshanlai, pavel, zwisler, Tejun Heo, Andrew Morton

On Thu, Oct 18, 2018 at 1:15 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On Thu, 2018-10-18 at 12:38 -0700, Alexander Duyck wrote:
> > Basically if somebody loads a driver the dev->driver becomes set. If a
> > driver is removed it will clear dev->driver and set driver_data to
> > 0/NULL. That is what I am using as a mutex to track it in conjunction
> > with the device mutex. Basically if somebody attempts to attach a driver
> > before we get there we just exit and don't attempt to load this driver.
>
> I don't think that the above matches your code. __device_attach() does not
> set the dev->driver pointer before scheduling an asynchronous probe. Only
> dev->driver_data gets set before the asynchronous probe is scheduled. Since
> driver_detach() only iterates over devices that are in the per-driver klist,
> it will skip all devices for which an asynchronous probe has been scheduled
> but __device_attach_async_helper() has not yet been called. My conclusion
> remains that this patch does not prevent a driver pointer from becoming
> invalid concurrently with __device_attach_async_helper() dereferencing the
> same driver pointer.
>
> Bart.

I see what you are talking about now. Actually I think this was an
existing issue before my patch even came into play. Basically the code
as it currently stands is device specific in terms of the attach and
release code.

I wonder if we shouldn't move the async_synchronize_full call in
__device_release_driver down into driver_detach, before we even start
the for loop. Assuming the driver is no longer associated with the
bus, that should flush out all pending probes so that we can then pull
the devices out of the devices list at least. I may also look at
adding an additional bitflag to the device struct to indicate that it
has a driver attach pending. Then for races between any attach and
detach calls the logic becomes straightforward: attach will set the
bit and provide driver data, detach will clear the bit and the driver
data. If a driver loads in between it should clear the bit as well.
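
Roughly what I have in mind, as an untested sketch; the flag name is
just a placeholder, not necessarily what the final patch will use:

	/* attach side, called under the device lock */
	static void queue_async_attach(struct device *dev,
				       struct device_driver *drv)
	{
		dev->async_probe_pending = true;  /* placeholder flag */
		dev_set_drvdata(dev, drv);
		async_schedule(__device_attach_async_helper, dev);
	}

	/* detach side, called under the device lock */
	static void cancel_async_attach(struct device *dev)
	{
		dev->async_probe_pending = false;
		dev_set_drvdata(dev, NULL);
	}

	void driver_detach(struct device_driver *drv)
	{
		/* flush outstanding async probes before walking the
		 * driver's klist so none of them still holds drv */
		async_synchronize_full();

		/* existing loop over the driver's devices goes here */
	}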

I'll work on it over the next couple days and hopefully have something
ready for testing/review early next week.

Thanks.

- Alex

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [driver-core PATCH v4 4/6] driver core: Probe devices asynchronously instead of the driver
  2018-10-19  2:20         ` Alexander Duyck
@ 2018-10-19  2:31           ` Bart Van Assche
  2018-10-19 22:35             ` Alexander Duyck
  0 siblings, 1 reply; 15+ messages in thread
From: Bart Van Assche @ 2018-10-19  2:31 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: alexander.h.duyck, Greg KH, LKML, len.brown, rafael, linux-pm,
	jiangshanlai, pavel, zwisler, Tejun Heo, Andrew Morton

On 10/18/18 7:20 PM, Alexander Duyck wrote:
> I see what you are talking about now. Actually I think this was an
> existing issue before my patch even came into play. Basically the code
> as it currently stands is device specific in terms of the attach and
> release code.
> 
> I wonder if we shouldn't have the async_synchronize_full call in
> __device_release_driver moved down and into driver_detach before we
> even start the for loop. Assuming the driver is no longer associated
> with the bus that should flush out all devices so that we can then
> pull them out of the devices list at least. I may look at adding an
> additional bitflag to the device struct to indicate that it has a
> driver attach pending. Then for things like races between any attach
> and detach calls the logic becomes pretty straightforward. Attach
> will set the bit and provide driver data, detach will clear the bit
> and the driver data. If a driver loads in between it should clear the
> bit as well.
> 
> I'll work on it over the next couple days and hopefully have something
> ready for testing/review early next week.

Hi Alex,

How about verifying in __driver_attach_async_helper() that the driver
pointer is still valid by checking whether bus_for_each_drv(dev->bus,
...) can still find it? That approach requires protection with a mutex
to avoid races with the driver detach code but shouldn't require any
new flags in struct device.
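
As an untested sketch (the mutex against the detach code and the
get_device() taken when the probe was scheduled are assumed rather
than shown):

	static int match_driver(struct device_driver *drv, void *data)
	{
		return drv == data;	/* nonzero stops the iteration */
	}

	static void __driver_attach_async_helper(void *_dev,
						 async_cookie_t cookie)
	{
		struct device *dev = _dev;
		struct device_driver *drv = dev_get_drvdata(dev);

		device_lock(dev);
		/* proceed only if drv is still registered on this bus */
		if (bus_for_each_drv(dev->bus, NULL, drv, match_driver))
			driver_probe_device(drv, dev);
		device_unlock(dev);
		put_device(dev);
	}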

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [driver-core PATCH v4 4/6] driver core: Probe devices asynchronously instead of the driver
  2018-10-19  2:31           ` Bart Van Assche
@ 2018-10-19 22:35             ` Alexander Duyck
  0 siblings, 0 replies; 15+ messages in thread
From: Alexander Duyck @ 2018-10-19 22:35 UTC (permalink / raw)
  To: Bart Van Assche, Alexander Duyck
  Cc: Greg KH, LKML, len.brown, rafael, linux-pm, jiangshanlai, pavel,
	zwisler, Tejun Heo, Andrew Morton

On Thu, 2018-10-18 at 19:31 -0700, Bart Van Assche wrote:
> On 10/18/18 7:20 PM, Alexander Duyck wrote:
> > I see what you are talking about now. Actually I think this was an
> > existing issue before my patch even came into play. Basically the
> > code
> > as it currently stands is device specific in terms of the attach
> > and
> > release code.
> > 
> > I wonder if we shouldn't have the async_synchronize_full call in
> > __device_release_driver moved down and into driver_detach before we
> > even start the for loop. Assuming the driver is no longer
> > associated
> > with the bus that should flush out all devices so that we can then
> > pull them out of the devices list at least. I may look at adding an
> > additional bitflag to the device struct to indicate that it has a
> > driver attach pending. Then for things like races between any
> > attach
> > and detach calls the logic becomes pretty straightforward. Attach
> > will set the bit and provide driver data, detach will clear the bit
> > and the driver data. If a driver loads in between it should clear
> > the
> > bit as well.
> > 
> > I'll work on it over the next couple days and hopefully have
> > something
> > ready for testing/review early next week.
> 
> Hi Alex,
> 
> How about verifying in __driver_attach_async_helper() that the
> driver pointer is still valid by checking whether
> bus_for_each_drv(dev->bus, ...) can still find it? That approach
> requires protection with a mutex to avoid races with the driver
> detach code but shouldn't require any new flags in struct device.
> 
> Thanks,
> 
> Bart.

That doesn't solve the problem I was pointing out though.

So the issue you are addressing by rechecking the bus should already be
handled by just calling async_synchronize_full in driver_detach. After
all, we can't have a driver that is being added to the bus while it is
also being removed. So if we are detaching the driver, calling
async_synchronize_full will flush out any deferred attach calls, and
there will be no further ones since the driver has already been removed
from the bus.

The issue I was thinking of is how we deal with races between
device_attach and device_release_driver. In that case we know the
device we want to remove a driver from, but we may not have information
about the driver. The easiest solution is basically just to cancel the
pending attach. I could use the approach I am doing now and just NULL
out the driver_data if dev->driver is NULL. The only thing I am still
debating is whether dev->driver being NULL is enough on its own to
signal that we are using driver_data to carry a pointer to a pending
driver, or if we should add an extra bit to carry that meaning. It
would be pretty easy to add a bit and use it to prevent any false reads
of the deferred driver as driver data, or of driver data as a deferred
driver; it would essentially act as a type bit, along the lines of the
sketch below.
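
As an untested sketch, with "pending_probe" as a placeholder name for
the new bit:

	/* bit set: driver_data carries a pending driver pointer */
	static struct device_driver *pending_driver(struct device *dev)
	{
		return dev->pending_probe ? dev_get_drvdata(dev) : NULL;
	}

	/* bit clear: driver_data is ordinary driver data */
	static void *real_drvdata(struct device *dev)
	{
		return dev->pending_probe ? NULL : dev_get_drvdata(dev);
	}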

Thanks.

- Alex


^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2018-10-19 22:35 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-15 15:09 [driver-core PATCH v4 0/6] Add NUMA aware async_schedule calls Alexander Duyck
2018-10-15 15:09 ` [driver-core PATCH v4 1/6] workqueue: Provide queue_work_node to queue work near a given NUMA node Alexander Duyck
2018-10-15 15:09 ` [driver-core PATCH v4 2/6] async: Add support for queueing on specific " Alexander Duyck
2018-10-15 15:09 ` [driver-core PATCH v4 3/6] device core: Consolidate locking and unlocking of parent and device Alexander Duyck
2018-10-18  7:46   ` Rafael J. Wysocki
2018-10-18 17:53   ` Bart Van Assche
2018-10-15 15:09 ` [driver-core PATCH v4 4/6] driver core: Probe devices asynchronously instead of the driver Alexander Duyck
2018-10-18 18:11   ` Bart Van Assche
2018-10-18 19:38     ` Alexander Duyck
2018-10-18 20:13       ` Bart Van Assche
2018-10-19  2:20         ` Alexander Duyck
2018-10-19  2:31           ` Bart Van Assche
2018-10-19 22:35             ` Alexander Duyck
2018-10-15 15:09 ` [driver-core PATCH v4 5/6] driver core: Attach devices on CPU local to device node Alexander Duyck
2018-10-15 15:09 ` [driver-core PATCH v4 6/6] PM core: Use new async_schedule_dev command Alexander Duyck
