linux-kernel.vger.kernel.org archive mirror
* [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls
@ 2018-12-05 17:25 Alexander Duyck
  2018-12-05 17:25 ` [driver-core PATCH v8 1/9] driver core: Move async_synchronize_full call Alexander Duyck
                   ` (9 more replies)
  0 siblings, 10 replies; 21+ messages in thread
From: Alexander Duyck @ 2018-12-05 17:25 UTC (permalink / raw)
  To: linux-kernel, gregkh
  Cc: mcgrof, linux-nvdimm, tj, akpm, linux-pm, jiangshanlai, rafael,
	len.brown, pavel, zwisler, dan.j.williams, dave.jiang,
	bvanassche, alexander.h.duyck

This patch set provides functionality that will help to improve the
locality of the async_schedule calls used to provide deferred
initialization.

This patch set originally started out focused on just the one call to
async_schedule_domain in the nvdimm tree that was being used to defer the
device_add call. However, after doing some digging I realized the scope of
this was much broader than I had originally planned. As such, I went
through and reworked the underlying infrastructure, down to replacing the
queue_work call itself with a function of my own, and opted to provide a
NUMA aware solution that would work for a broader audience.

In addition I have added several tweaks and/or clean-ups to the front of the
patch set. Patches 1 through 4 address a number of issues that were
preventing the existing async_schedule calls from delivering the performance
they could, either because they did not scale on a per-device basis or
because of issues that could result in a potential deadlock. For example,
patch 4 addresses the fact that we were calling async_schedule once per
driver instead of once per device; without fixing that first, devices would
still have ended up being probed on a non-local node.

RFC->v1:
    Dropped nvdimm patch to submit later.
        It relies on code in libnvdimm development tree.
    Simplified queue_work_near to just convert node into a CPU.
    Split up drivers core and PM core patches.
v1->v2:
    Renamed queue_work_near to queue_work_node
    Added WARN_ON_ONCE if we use queue_work_node with per-cpu workqueue
v2->v3:
    Added Acked-by for queue_work_node patch
    Continued rename from _near to _node to be consistent with queue_work_node
        Renamed async_schedule_near_domain to async_schedule_node_domain
        Renamed async_schedule_near to async_schedule_node
    Added kerneldoc for new async_schedule_XXX functions
    Updated patch description for patch 4 to include data on potential gains
v3->v4:
    Added patch to consolidate use of need_parent_lock
    Make asynchronous driver probing explicit about use of drvdata
v4->v5:
    Added patch to move async_synchronize_full to address deadlock
    Added bit async_probe to act as mutex for probe/remove calls
    Added back nvdimm patch as code it relies on is now in Linus's tree
    Incorporated review comments on parent & device locking consolidation
    Rebased on latest linux-next
v5->v6:
    Drop the "This patch" or "This change" from start of patch descriptions.
    Drop unnecessary parenthesis in first patch
    Use same wording for "selecting a CPU" in comments added in first patch
    Added kernel documentation for async_probe member of device
    Fixed up comments for async_schedule calls in patch 2
    Moved code related to setting async driver out of device.h and into dd.c
    Added Reviewed-by for several patches
v6->v7:
    Fixed typo which had kernel doc refer to "lock" when I meant "unlock"
    Dropped "bool X:1" to "u8 X:1" from patch description
    Added async_driver to device_private structure to store driver
    Dropped unnecessary code shuffle from async_probe patch
    Reordered patches to move fixes up to front
    Added Reviewed-by for several patches
    Updated cover page and patch descriptions throughout the set
v7->v8:
    Replaced async_probe value with dead, only apply dead in device_del
    Dropped Reviewed-by from patch 2 due to significant changes
    Added Reviewed-by for patches reviewed by Luis Chamberlain

---

Alexander Duyck (9):
      driver core: Move async_synchronize_full call
      driver core: Establish order of operations for device_add and device_del via bitflag
      device core: Consolidate locking and unlocking of parent and device
      driver core: Probe devices asynchronously instead of the driver
      workqueue: Provide queue_work_node to queue work near a given NUMA node
      async: Add support for queueing on specific NUMA node
      driver core: Attach devices on CPU local to device node
      PM core: Use new async_schedule_dev command
      libnvdimm: Schedule device registration on node local to the device


 drivers/base/base.h       |    4 +
 drivers/base/bus.c        |   46 ++------------
 drivers/base/core.c       |   11 +++
 drivers/base/dd.c         |  152 ++++++++++++++++++++++++++++++++++++++-------
 drivers/base/power/main.c |   12 ++--
 drivers/nvdimm/bus.c      |   11 ++-
 include/linux/async.h     |   82 +++++++++++++++++++++++-
 include/linux/device.h    |    5 +
 include/linux/workqueue.h |    2 +
 kernel/async.c            |   53 +++++++++-------
 kernel/workqueue.c        |   84 +++++++++++++++++++++++++
 11 files changed, 362 insertions(+), 100 deletions(-)

--

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [driver-core PATCH v8 1/9] driver core: Move async_synchronize_full call
  2018-12-05 17:25 [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls Alexander Duyck
@ 2018-12-05 17:25 ` Alexander Duyck
  2018-12-05 17:25 ` [driver-core PATCH v8 2/9] driver core: Establish order of operations for device_add and device_del via bitflag Alexander Duyck
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Alexander Duyck @ 2018-12-05 17:25 UTC (permalink / raw)
  To: linux-kernel, gregkh
  Cc: mcgrof, linux-nvdimm, tj, akpm, linux-pm, jiangshanlai, rafael,
	len.brown, pavel, zwisler, dan.j.williams, dave.jiang,
	bvanassche, alexander.h.duyck

Move the async_synchronize_full call out of __device_release_driver and
into driver_detach.

The idea behind this is that the async_synchronize_full call will only
guarantee that any existing async operations are flushed. This doesn't do
anything to guarantee that a hotplug event that may occur while we are
doing the release of the driver will not be asynchronously scheduled.

By moving this into the driver_detach path we can avoid potential deadlocks,
as we aren't holding the device lock at this point. In addition, the driver
we want to flush should no longer be loaded, so the flush will take care of
any asynchronous events the driver we are detaching might have scheduled.

Fixes: 765230b5f084 ("driver-core: add asynchronous probing support for drivers")
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 drivers/base/dd.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/base/dd.c b/drivers/base/dd.c
index 689ac9dc6d81..88713f182086 100644
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -931,9 +931,6 @@ static void __device_release_driver(struct device *dev, struct device *parent)
 
 	drv = dev->driver;
 	if (drv) {
-		if (driver_allows_async_probing(drv))
-			async_synchronize_full();
-
 		while (device_links_busy(dev)) {
 			device_unlock(dev);
 			if (parent)
@@ -1039,6 +1036,9 @@ void driver_detach(struct device_driver *drv)
 	struct device_private *dev_prv;
 	struct device *dev;
 
+	if (driver_allows_async_probing(drv))
+		async_synchronize_full();
+
 	for (;;) {
 		spin_lock(&drv->p->klist_devices.k_lock);
 		if (list_empty(&drv->p->klist_devices.k_list)) {



* [driver-core PATCH v8 2/9] driver core: Establish order of operations for device_add and device_del via bitflag
  2018-12-05 17:25 [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls Alexander Duyck
  2018-12-05 17:25 ` [driver-core PATCH v8 1/9] driver core: Move async_synchronize_full call Alexander Duyck
@ 2018-12-05 17:25 ` Alexander Duyck
  2018-12-10 18:58   ` Dan Williams
  2018-12-05 17:25 ` [driver-core PATCH v8 3/9] device core: Consolidate locking and unlocking of parent and device Alexander Duyck
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 21+ messages in thread
From: Alexander Duyck @ 2018-12-05 17:25 UTC (permalink / raw)
  To: linux-kernel, gregkh
  Cc: mcgrof, linux-nvdimm, tj, akpm, linux-pm, jiangshanlai, rafael,
	len.brown, pavel, zwisler, dan.j.williams, dave.jiang,
	bvanassche, alexander.h.duyck

Add an additional bit flag to the device struct named "dead".

This flag guarantees that when device_del is executed on a given
interface, an async worker will not attempt to attach a driver to it after
that call. Previously no such guarantee existed, so device_del could
remove a driver from an interface only to have an async worker attempt to
probe the driver later, when the asynchronous probe call finally
completed.

Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 drivers/base/core.c    |   11 +++++++++++
 drivers/base/dd.c      |    8 ++++++--
 include/linux/device.h |    5 +++++
 3 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/drivers/base/core.c b/drivers/base/core.c
index f3e6ca4170b4..70358327303b 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -2075,6 +2075,17 @@ void device_del(struct device *dev)
 	struct kobject *glue_dir = NULL;
 	struct class_interface *class_intf;
 
+	/*
+	 * Hold the device lock and set the "dead" flag to guarantee that
+	 * the update behavior is consistent with the other bitfields near
+	 * it and that we cannot have an asynchronous probe routine trying
+	 * to run while we are tearing out the bus/class/sysfs from
+	 * underneath the device.
+	 */
+	device_lock(dev);
+	dev->dead = true;
+	device_unlock(dev);
+
 	/* Notify clients of device removal.  This call must come
 	 * before dpm_sysfs_remove().
 	 */
diff --git a/drivers/base/dd.c b/drivers/base/dd.c
index 88713f182086..3bb8c3e0f3da 100644
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -774,6 +774,10 @@ static void __device_attach_async_helper(void *_dev, async_cookie_t cookie)
 
 	device_lock(dev);
 
+	/* device is or has been removed from the bus, just bail out */
+	if (dev->dead)
+		goto out_unlock;
+
 	if (dev->parent)
 		pm_runtime_get_sync(dev->parent);
 
@@ -784,7 +788,7 @@ static void __device_attach_async_helper(void *_dev, async_cookie_t cookie)
 
 	if (dev->parent)
 		pm_runtime_put(dev->parent);
-
+out_unlock:
 	device_unlock(dev);
 
 	put_device(dev);
@@ -897,7 +901,7 @@ static int __driver_attach(struct device *dev, void *data)
 	if (dev->parent && dev->bus->need_parent_lock)
 		device_lock(dev->parent);
 	device_lock(dev);
-	if (!dev->driver)
+	if (!dev->dead && !dev->driver)
 		driver_probe_device(drv, dev);
 	device_unlock(dev);
 	if (dev->parent && dev->bus->need_parent_lock)
diff --git a/include/linux/device.h b/include/linux/device.h
index 4921a6192f6b..393704e5b602 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -957,6 +957,10 @@ struct dev_links_info {
  *              device.
  * @dma_coherent: this particular device is dma coherent, even if the
  *		architecture supports non-coherent devices.
+ * @dead:	This device is currently either in the process of or has
+ *		been removed from the system. Any asynchronous events
+ *		scheduled for this device should exit without taking any
+ *		action.
  *
  * At the lowest level, every device in a Linux system is represented by an
  * instance of struct device. The device structure contains the information
@@ -1051,6 +1055,7 @@ struct device {
     defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL)
 	bool			dma_coherent:1;
 #endif
+	bool			dead:1;
 };
 
 static inline struct device *kobj_to_dev(struct kobject *kobj)



* [driver-core PATCH v8 3/9] device core: Consolidate locking and unlocking of parent and device
  2018-12-05 17:25 [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls Alexander Duyck
  2018-12-05 17:25 ` [driver-core PATCH v8 1/9] driver core: Move async_synchronize_full call Alexander Duyck
  2018-12-05 17:25 ` [driver-core PATCH v8 2/9] driver core: Establish order of operations for device_add and device_del via bitflag Alexander Duyck
@ 2018-12-05 17:25 ` Alexander Duyck
  2018-12-05 17:25 ` [driver-core PATCH v8 4/9] driver core: Probe devices asynchronously instead of the driver Alexander Duyck
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Alexander Duyck @ 2018-12-05 17:25 UTC (permalink / raw)
  To: linux-kernel, gregkh
  Cc: mcgrof, linux-nvdimm, tj, akpm, linux-pm, jiangshanlai, rafael,
	len.brown, pavel, zwisler, dan.j.williams, dave.jiang,
	bvanassche, alexander.h.duyck

Try to consolidate all of the locking and unlocking of both the parent and
device when attaching or removing a driver from a given device.

To do that I first consolidated the lock pattern into two functions
__device_driver_lock and __device_driver_unlock. After doing that I then
created functions specific to attaching and detaching the driver while
acquiring these locks. By doing this I was able to reduce the number of
spots where we touch need_parent_lock from 12 down to 4.

This patch should produce no functional changes; it is meant to be a code
clean-up/consolidation only.

Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 drivers/base/base.h |    2 +
 drivers/base/bus.c  |   23 ++----------
 drivers/base/dd.c   |   95 ++++++++++++++++++++++++++++++++++++++++-----------
 3 files changed, 81 insertions(+), 39 deletions(-)

diff --git a/drivers/base/base.h b/drivers/base/base.h
index 7a419a7a6235..3f22ebd6117a 100644
--- a/drivers/base/base.h
+++ b/drivers/base/base.h
@@ -124,6 +124,8 @@ extern int driver_add_groups(struct device_driver *drv,
 			     const struct attribute_group **groups);
 extern void driver_remove_groups(struct device_driver *drv,
 				 const struct attribute_group **groups);
+int device_driver_attach(struct device_driver *drv, struct device *dev);
+void device_driver_detach(struct device *dev);
 
 extern char *make_class_name(const char *name, struct kobject *kobj);
 
diff --git a/drivers/base/bus.c b/drivers/base/bus.c
index 8bfd27ec73d6..8a630f9bd880 100644
--- a/drivers/base/bus.c
+++ b/drivers/base/bus.c
@@ -184,11 +184,7 @@ static ssize_t unbind_store(struct device_driver *drv, const char *buf,
 
 	dev = bus_find_device_by_name(bus, NULL, buf);
 	if (dev && dev->driver == drv) {
-		if (dev->parent && dev->bus->need_parent_lock)
-			device_lock(dev->parent);
-		device_release_driver(dev);
-		if (dev->parent && dev->bus->need_parent_lock)
-			device_unlock(dev->parent);
+		device_driver_detach(dev);
 		err = count;
 	}
 	put_device(dev);
@@ -211,13 +207,7 @@ static ssize_t bind_store(struct device_driver *drv, const char *buf,
 
 	dev = bus_find_device_by_name(bus, NULL, buf);
 	if (dev && dev->driver == NULL && driver_match_device(drv, dev)) {
-		if (dev->parent && bus->need_parent_lock)
-			device_lock(dev->parent);
-		device_lock(dev);
-		err = driver_probe_device(drv, dev);
-		device_unlock(dev);
-		if (dev->parent && bus->need_parent_lock)
-			device_unlock(dev->parent);
+		err = device_driver_attach(drv, dev);
 
 		if (err > 0) {
 			/* success */
@@ -769,13 +759,8 @@ EXPORT_SYMBOL_GPL(bus_rescan_devices);
  */
 int device_reprobe(struct device *dev)
 {
-	if (dev->driver) {
-		if (dev->parent && dev->bus->need_parent_lock)
-			device_lock(dev->parent);
-		device_release_driver(dev);
-		if (dev->parent && dev->bus->need_parent_lock)
-			device_unlock(dev->parent);
-	}
+	if (dev->driver)
+		device_driver_detach(dev);
 	return bus_rescan_devices_helper(dev, NULL);
 }
 EXPORT_SYMBOL_GPL(device_reprobe);
diff --git a/drivers/base/dd.c b/drivers/base/dd.c
index 3bb8c3e0f3da..e50d768cd3b5 100644
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -871,6 +871,64 @@ void device_initial_probe(struct device *dev)
 	__device_attach(dev, true);
 }
 
+/*
+ * __device_driver_lock - acquire locks needed to manipulate dev->drv
+ * @dev: Device we will update driver info for
+ * @parent: Parent device. Needed if the bus requires parent lock
+ *
+ * This function will take the required locks for manipulating dev->drv.
+ * Normally this will just be the @dev lock, but when called for a USB
+ * interface, @parent lock will be held as well.
+ */
+static void __device_driver_lock(struct device *dev, struct device *parent)
+{
+	if (parent && dev->bus->need_parent_lock)
+		device_lock(parent);
+	device_lock(dev);
+}
+
+/*
+ * __device_driver_unlock - release locks needed to manipulate dev->drv
+ * @dev: Device we will update driver info for
+ * @parent: Parent device. Needed if the bus requires parent lock
+ *
+ * This function will release the required locks for manipulating dev->drv.
 + * Normally this will just be the @dev lock, but when called for a
+ * USB interface, @parent lock will be released as well.
+ */
+static void __device_driver_unlock(struct device *dev, struct device *parent)
+{
+	device_unlock(dev);
+	if (parent && dev->bus->need_parent_lock)
+		device_unlock(parent);
+}
+
+/**
+ * device_driver_attach - attach a specific driver to a specific device
+ * @drv: Driver to attach
+ * @dev: Device to attach it to
+ *
+ * Manually attach driver to a device. Will acquire both @dev lock and
+ * @dev->parent lock if needed.
+ */
+int device_driver_attach(struct device_driver *drv, struct device *dev)
+{
+	int ret = 0;
+
+	__device_driver_lock(dev, dev->parent);
+
+	/*
+	 * If device has been removed or someone has already successfully
+	 * bound a driver before us just skip the driver probe call.
+	 */
+	if (!dev->dead && !dev->driver)
+		ret = driver_probe_device(drv, dev);
+
+	__device_driver_unlock(dev, dev->parent);
+
+	return ret;
+}
+
 static int __driver_attach(struct device *dev, void *data)
 {
 	struct device_driver *drv = data;
@@ -898,14 +956,7 @@ static int __driver_attach(struct device *dev, void *data)
 		return ret;
 	} /* ret > 0 means positive match */
 
-	if (dev->parent && dev->bus->need_parent_lock)
-		device_lock(dev->parent);
-	device_lock(dev);
-	if (!dev->dead && !dev->driver)
-		driver_probe_device(drv, dev);
-	device_unlock(dev);
-	if (dev->parent && dev->bus->need_parent_lock)
-		device_unlock(dev->parent);
+	device_driver_attach(drv, dev);
 
 	return 0;
 }
@@ -936,15 +987,11 @@ static void __device_release_driver(struct device *dev, struct device *parent)
 	drv = dev->driver;
 	if (drv) {
 		while (device_links_busy(dev)) {
-			device_unlock(dev);
-			if (parent)
-				device_unlock(parent);
+			__device_driver_unlock(dev, parent);
 
 			device_links_unbind_consumers(dev);
-			if (parent)
-				device_lock(parent);
 
-			device_lock(dev);
+			__device_driver_lock(dev, parent);
 			/*
 			 * A concurrent invocation of the same function might
 			 * have released the driver successfully while this one
@@ -997,16 +1044,12 @@ void device_release_driver_internal(struct device *dev,
 				    struct device_driver *drv,
 				    struct device *parent)
 {
-	if (parent && dev->bus->need_parent_lock)
-		device_lock(parent);
+	__device_driver_lock(dev, parent);
 
-	device_lock(dev);
 	if (!drv || drv == dev->driver)
 		__device_release_driver(dev, parent);
 
-	device_unlock(dev);
-	if (parent && dev->bus->need_parent_lock)
-		device_unlock(parent);
+	__device_driver_unlock(dev, parent);
 }
 
 /**
@@ -1031,6 +1074,18 @@ void device_release_driver(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(device_release_driver);
 
+/**
+ * device_driver_detach - detach driver from a specific device
+ * @dev: device to detach driver from
+ *
+ * Detach driver from device. Will acquire both @dev lock and @dev->parent
+ * lock if needed.
+ */
+void device_driver_detach(struct device *dev)
+{
+	device_release_driver_internal(dev, NULL, dev->parent);
+}
+
 /**
  * driver_detach - detach driver from all devices it controls.
  * @drv: driver.



* [driver-core PATCH v8 4/9] driver core: Probe devices asynchronously instead of the driver
  2018-12-05 17:25 [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls Alexander Duyck
                   ` (2 preceding siblings ...)
  2018-12-05 17:25 ` [driver-core PATCH v8 3/9] device core: Consolidate locking and unlocking of parent and device Alexander Duyck
@ 2018-12-05 17:25 ` Alexander Duyck
  2018-12-05 17:25 ` [driver-core PATCH v8 5/9] workqueue: Provide queue_work_node to queue work near a given NUMA node Alexander Duyck
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Alexander Duyck @ 2018-12-05 17:25 UTC (permalink / raw)
  To: linux-kernel, gregkh
  Cc: mcgrof, linux-nvdimm, tj, akpm, linux-pm, jiangshanlai, rafael,
	len.brown, pavel, zwisler, dan.j.williams, dave.jiang,
	bvanassche, alexander.h.duyck

Probe devices asynchronously instead of the driver. This results in us
seeing the same behavior if the device is registered before the driver or
after. This way we can avoid serializing the initialization should the
driver not be loaded until after the devices have already been added.

The motivation behind this is that if we have a set of devices that
take a significant amount of time to load we can greatly reduce the time to
load by processing them in parallel instead of one at a time. In addition,
each device can exist on a different node so placing a single thread on one
CPU to initialize all of the devices for a given driver can result in poor
performance on a system with multiple nodes.

This approach can significantly reduce the time needed to scan SCSI LUNs.
The only way to realize that speedup is by enabling more concurrency,
which is what is achieved with this patch.

To achieve this it was necessary to add a new member "async_driver" to the
device_private structure to store the driver pointer while we wait on the
deferred probe call.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 drivers/base/base.h |    2 ++
 drivers/base/bus.c  |   23 +++--------------------
 drivers/base/dd.c   |   43 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 48 insertions(+), 20 deletions(-)

diff --git a/drivers/base/base.h b/drivers/base/base.h
index 3f22ebd6117a..c95384a8e53c 100644
--- a/drivers/base/base.h
+++ b/drivers/base/base.h
@@ -64,6 +64,7 @@ struct driver_private {
  *	binding of drivers which were unable to get all the resources needed by
  *	the device; typically because it depends on another driver getting
  *	probed first.
+ * @async_driver - pointer to device driver awaiting probe via async_probe
  * @device - pointer back to the struct device that this structure is
  * associated with.
  *
@@ -75,6 +76,7 @@ struct device_private {
 	struct klist_node knode_driver;
 	struct klist_node knode_bus;
 	struct list_head deferred_probe;
+	struct device_driver *async_driver;
 	struct device *device;
 };
 #define to_device_private_parent(obj)	\
diff --git a/drivers/base/bus.c b/drivers/base/bus.c
index 8a630f9bd880..0cd2eadd0816 100644
--- a/drivers/base/bus.c
+++ b/drivers/base/bus.c
@@ -606,17 +606,6 @@ static ssize_t uevent_store(struct device_driver *drv, const char *buf,
 }
 static DRIVER_ATTR_WO(uevent);
 
-static void driver_attach_async(void *_drv, async_cookie_t cookie)
-{
-	struct device_driver *drv = _drv;
-	int ret;
-
-	ret = driver_attach(drv);
-
-	pr_debug("bus: '%s': driver %s async attach completed: %d\n",
-		 drv->bus->name, drv->name, ret);
-}
-
 /**
  * bus_add_driver - Add a driver to the bus.
  * @drv: driver.
@@ -649,15 +638,9 @@ int bus_add_driver(struct device_driver *drv)
 
 	klist_add_tail(&priv->knode_bus, &bus->p->klist_drivers);
 	if (drv->bus->p->drivers_autoprobe) {
-		if (driver_allows_async_probing(drv)) {
-			pr_debug("bus: '%s': probing driver %s asynchronously\n",
-				drv->bus->name, drv->name);
-			async_schedule(driver_attach_async, drv);
-		} else {
-			error = driver_attach(drv);
-			if (error)
-				goto out_unregister;
-		}
+		error = driver_attach(drv);
+		if (error)
+			goto out_unregister;
 	}
 	module_add_driver(drv->owner, drv);
 
diff --git a/drivers/base/dd.c b/drivers/base/dd.c
index e50d768cd3b5..b731741059cb 100644
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -929,6 +929,30 @@ int device_driver_attach(struct device_driver *drv, struct device *dev)
 	return ret;
 }
 
+static void __driver_attach_async_helper(void *_dev, async_cookie_t cookie)
+{
+	struct device *dev = _dev;
+	struct device_driver *drv;
+	int ret = 0;
+
+	__device_driver_lock(dev, dev->parent);
+
+	drv = dev->p->async_driver;
+
+	/*
+	 * If device has been removed or someone has already successfully
+	 * bound a driver before us just skip the driver probe call.
+	 */
+	if (!dev->dead && !dev->driver)
+		ret = driver_probe_device(drv, dev);
+
+	__device_driver_unlock(dev, dev->parent);
+
+	dev_dbg(dev, "driver %s async attach completed: %d\n", drv->name, ret);
+
+	put_device(dev);
+}
+
 static int __driver_attach(struct device *dev, void *data)
 {
 	struct device_driver *drv = data;
@@ -956,6 +980,25 @@ static int __driver_attach(struct device *dev, void *data)
 		return ret;
 	} /* ret > 0 means positive match */
 
+	if (driver_allows_async_probing(drv)) {
+		/*
+		 * Instead of probing the device synchronously we will
+		 * probe it asynchronously to allow for more parallelism.
+		 *
+		 * We only take the device lock here in order to guarantee
+		 * that the dev->driver and async_driver fields are protected
+		 */
+		dev_dbg(dev, "probing driver %s asynchronously\n", drv->name);
+		device_lock(dev);
+		if (!dev->driver) {
+			get_device(dev);
+			dev->p->async_driver = drv;
+			async_schedule(__driver_attach_async_helper, dev);
+		}
+		device_unlock(dev);
+		return 0;
+	}
+
 	device_driver_attach(drv, dev);
 
 	return 0;



* [driver-core PATCH v8 5/9] workqueue: Provide queue_work_node to queue work near a given NUMA node
  2018-12-05 17:25 [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls Alexander Duyck
                   ` (3 preceding siblings ...)
  2018-12-05 17:25 ` [driver-core PATCH v8 4/9] driver core: Probe devices asynchronously instead of the driver Alexander Duyck
@ 2018-12-05 17:25 ` Alexander Duyck
  2018-12-05 17:25 ` [driver-core PATCH v8 6/9] async: Add support for queueing on specific " Alexander Duyck
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Alexander Duyck @ 2018-12-05 17:25 UTC (permalink / raw)
  To: linux-kernel, gregkh
  Cc: mcgrof, linux-nvdimm, tj, akpm, linux-pm, jiangshanlai, rafael,
	len.brown, pavel, zwisler, dan.j.williams, dave.jiang,
	bvanassche, alexander.h.duyck

Provide a new function, queue_work_node, which is meant to schedule work on
a "random" CPU of the requested NUMA node. The main motivation for this is
to help asynchronous init improve boot times for devices that are local to
a specific node.

For now we just default to the first CPU that is in the intersection of the
cpumask of the node and the online cpumask. The only exception is that if
the current CPU is local to the requested node, we will just use it. This
should work for our purposes, as we are currently only using this for
unbound work, so the CPU will be translated to a node anyway instead of
being used directly.

As we are only using the first CPU to represent the NUMA node for now I am
limiting the scope of the function so that it can only be used with unbound
workqueues.

Acked-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 include/linux/workqueue.h |    2 +
 kernel/workqueue.c        |   84 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 86 insertions(+)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 60d673e15632..1f50c1e586e7 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -463,6 +463,8 @@ int workqueue_set_unbound_cpumask(cpumask_var_t cpumask);
 
 extern bool queue_work_on(int cpu, struct workqueue_struct *wq,
 			struct work_struct *work);
+extern bool queue_work_node(int node, struct workqueue_struct *wq,
+			    struct work_struct *work);
 extern bool queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
 			struct delayed_work *work, unsigned long delay);
 extern bool mod_delayed_work_on(int cpu, struct workqueue_struct *wq,
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 392be4b252f6..d5a26e456f7a 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1492,6 +1492,90 @@ bool queue_work_on(int cpu, struct workqueue_struct *wq,
 }
 EXPORT_SYMBOL(queue_work_on);
 
+/**
+ * workqueue_select_cpu_near - Select a CPU based on NUMA node
+ * @node: NUMA node ID that we want to select a CPU from
+ *
+ * This function will attempt to find a "random" cpu available on a given
+ * node. If there are no CPUs available on the given node it will return
+ * WORK_CPU_UNBOUND indicating that we should just schedule to any
+ * available CPU if we need to schedule this work.
+ */
+static int workqueue_select_cpu_near(int node)
+{
+	int cpu;
+
+	/* No point in doing this if NUMA isn't enabled for workqueues */
+	if (!wq_numa_enabled)
+		return WORK_CPU_UNBOUND;
+
+	/* Delay binding to CPU if node is not valid or online */
+	if (node < 0 || node >= MAX_NUMNODES || !node_online(node))
+		return WORK_CPU_UNBOUND;
+
+	/* Use local node/cpu if we are already there */
+	cpu = raw_smp_processor_id();
+	if (node == cpu_to_node(cpu))
+		return cpu;
+
+	/* Use "random" otherwise known as "first" online CPU of node */
+	cpu = cpumask_any_and(cpumask_of_node(node), cpu_online_mask);
+
+	/* If CPU is valid return that, otherwise just defer */
+	return cpu < nr_cpu_ids ? cpu : WORK_CPU_UNBOUND;
+}
+
+/**
+ * queue_work_node - queue work on a "random" cpu for a given NUMA node
+ * @node: NUMA node that we are targeting the work for
+ * @wq: workqueue to use
+ * @work: work to queue
+ *
+ * We queue the work to a "random" CPU within a given NUMA node. The basic
+ * idea here is to provide a way to somehow associate work with a given
+ * NUMA node.
+ *
+ * This function will only make a best effort attempt at getting this onto
+ * the right NUMA node. If no node is requested or the requested node is
+ * offline then we just fall back to standard queue_work behavior.
+ *
+ * Currently the "random" CPU ends up being the first available CPU in the
+ * intersection of cpu_online_mask and the cpumask of the node, unless we
+ * are running on the node. In that case we just use the current CPU.
+ *
+ * Return: %false if @work was already on a queue, %true otherwise.
+ */
+bool queue_work_node(int node, struct workqueue_struct *wq,
+		     struct work_struct *work)
+{
+	unsigned long flags;
+	bool ret = false;
+
+	/*
+	 * This current implementation is specific to unbound workqueues.
+	 * Specifically we only return the first available CPU for a given
+	 * node instead of cycling through individual CPUs within the node.
+	 *
+	 * If this is used with a per-cpu workqueue then the logic in
+	 * workqueue_select_cpu_near would need to be updated to allow for
+	 * some round robin type logic.
+	 */
+	WARN_ON_ONCE(!(wq->flags & WQ_UNBOUND));
+
+	local_irq_save(flags);
+
+	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
+		int cpu = workqueue_select_cpu_near(node);
+
+		__queue_work(cpu, wq, work);
+		ret = true;
+	}
+
+	local_irq_restore(flags);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(queue_work_node);
+
 void delayed_work_timer_fn(struct timer_list *t)
 {
 	struct delayed_work *dwork = from_timer(dwork, t, timer);


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [driver-core PATCH v8 6/9] async: Add support for queueing on specific NUMA node
  2018-12-05 17:25 [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls Alexander Duyck
                   ` (4 preceding siblings ...)
  2018-12-05 17:25 ` [driver-core PATCH v8 5/9] workqueue: Provide queue_work_node to queue work near a given NUMA node Alexander Duyck
@ 2018-12-05 17:25 ` Alexander Duyck
  2018-12-05 17:25 ` [driver-core PATCH v8 7/9] driver core: Attach devices on CPU local to device node Alexander Duyck
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Alexander Duyck @ 2018-12-05 17:25 UTC (permalink / raw)
  To: linux-kernel, gregkh
  Cc: mcgrof, linux-nvdimm, tj, akpm, linux-pm, jiangshanlai, rafael,
	len.brown, pavel, zwisler, dan.j.williams, dave.jiang,
	bvanassche, alexander.h.duyck

Introduce four new variants of the async_schedule_ functions that allow
scheduling on a specific NUMA node.

The first two functions, async_schedule_node and
async_schedule_node_domain, map to async_schedule and
async_schedule_domain respectively, but provide NUMA node specific
functionality. They replace the original functions, which become inline
wrappers that call the new functions while passing NUMA_NO_NODE.

The other two functions, async_schedule_dev and
async_schedule_dev_domain, provide NUMA specific functionality when the
data argument is a device and that device has a NUMA node other than
NUMA_NO_NODE.

The main motivation behind this is to allow device specific init work
to be scheduled on specific NUMA nodes in order to improve the
performance of memory initialization.

I have seen a significant improvement in initialization time for
persistent memory as a result of this approach. In the case of 3TB of
memory on a single node, the worst-case initialization time went from
36s down to about 26s, a 10s improvement. As such the data shows a
general benefit from affinitizing the async work to the node local to
the device.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 include/linux/async.h |   82 +++++++++++++++++++++++++++++++++++++++++++++++--
 kernel/async.c        |   53 +++++++++++++++++---------------
 2 files changed, 108 insertions(+), 27 deletions(-)

diff --git a/include/linux/async.h b/include/linux/async.h
index 6b0226bdaadc..f81d6dbffe68 100644
--- a/include/linux/async.h
+++ b/include/linux/async.h
@@ -14,6 +14,8 @@
 
 #include <linux/types.h>
 #include <linux/list.h>
+#include <linux/numa.h>
+#include <linux/device.h>
 
 typedef u64 async_cookie_t;
 typedef void (*async_func_t) (void *data, async_cookie_t cookie);
@@ -37,9 +39,83 @@ struct async_domain {
 	struct async_domain _name = { .pending = LIST_HEAD_INIT(_name.pending), \
 				      .registered = 0 }
 
-extern async_cookie_t async_schedule(async_func_t func, void *data);
-extern async_cookie_t async_schedule_domain(async_func_t func, void *data,
-					    struct async_domain *domain);
+async_cookie_t async_schedule_node(async_func_t func, void *data,
+				   int node);
+async_cookie_t async_schedule_node_domain(async_func_t func, void *data,
+					  int node,
+					  struct async_domain *domain);
+
+/**
+ * async_schedule - schedule a function for asynchronous execution
+ * @func: function to execute asynchronously
+ * @data: data pointer to pass to the function
+ *
+ * Returns an async_cookie_t that may be used for checkpointing later.
+ * Note: This function may be called from atomic or non-atomic contexts.
+ */
+static inline async_cookie_t async_schedule(async_func_t func, void *data)
+{
+	return async_schedule_node(func, data, NUMA_NO_NODE);
+}
+
+/**
+ * async_schedule_domain - schedule a function for asynchronous execution within a certain domain
+ * @func: function to execute asynchronously
+ * @data: data pointer to pass to the function
+ * @domain: the domain
+ *
+ * Returns an async_cookie_t that may be used for checkpointing later.
+ * @domain may be used in the async_synchronize_*_domain() functions to
+ * wait within a certain synchronization domain rather than globally.
+ * Note: This function may be called from atomic or non-atomic contexts.
+ */
+static inline async_cookie_t
+async_schedule_domain(async_func_t func, void *data,
+		      struct async_domain *domain)
+{
+	return async_schedule_node_domain(func, data, NUMA_NO_NODE, domain);
+}
+
+/**
+ * async_schedule_dev - A device specific version of async_schedule
+ * @func: function to execute asynchronously
+ * @dev: device argument to be passed to function
+ *
+ * Returns an async_cookie_t that may be used for checkpointing later.
+ * @dev is used as both the argument for the function and to provide NUMA
+ * context for where to run the function. By doing this we can try to
+ * provide for the best possible outcome by operating on the device on the
+ * CPUs closest to the device.
+ * Note: This function may be called from atomic or non-atomic contexts.
+ */
+static inline async_cookie_t
+async_schedule_dev(async_func_t func, struct device *dev)
+{
+	return async_schedule_node(func, dev, dev_to_node(dev));
+}
+
+/**
+ * async_schedule_dev_domain - A device specific version of async_schedule_domain
+ * @func: function to execute asynchronously
+ * @dev: device argument to be passed to function
+ * @domain: the domain
+ *
+ * Returns an async_cookie_t that may be used for checkpointing later.
+ * @dev is used as both the argument for the function and to provide NUMA
+ * context for where to run the function. By doing this we can try to
+ * provide for the best possible outcome by operating on the device on the
+ * CPUs closest to the device.
+ * @domain may be used in the async_synchronize_*_domain() functions to
+ * wait within a certain synchronization domain rather than globally.
+ * Note: This function may be called from atomic or non-atomic contexts.
+ */
+static inline async_cookie_t
+async_schedule_dev_domain(async_func_t func, struct device *dev,
+			  struct async_domain *domain)
+{
+	return async_schedule_node_domain(func, dev, dev_to_node(dev), domain);
+}
+
 void async_unregister_domain(struct async_domain *domain);
 extern void async_synchronize_full(void);
 extern void async_synchronize_full_domain(struct async_domain *domain);
diff --git a/kernel/async.c b/kernel/async.c
index 4932e9193fa3..59ce133eef11 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -148,7 +148,25 @@ static void async_run_entry_fn(struct work_struct *work)
 	wake_up(&async_done);
 }
 
-static async_cookie_t __async_schedule(async_func_t func, void *data, struct async_domain *domain)
+/**
+ * async_schedule_node_domain - NUMA specific version of async_schedule_domain
+ * @func: function to execute asynchronously
+ * @data: data pointer to pass to the function
+ * @node: NUMA node that we want to schedule this on or close to
+ * @domain: the domain
+ *
+ * Returns an async_cookie_t that may be used for checkpointing later.
+ * @domain may be used in the async_synchronize_*_domain() functions to
+ * wait within a certain synchronization domain rather than globally.
+ *
+ * Note: This function may be called from atomic or non-atomic contexts.
+ *
+ * The node requested will be honored on a best effort basis. If the node
+ * has no CPUs associated with it then the work is distributed among all
+ * available CPUs.
+ */
+async_cookie_t async_schedule_node_domain(async_func_t func, void *data,
+					  int node, struct async_domain *domain)
 {
 	struct async_entry *entry;
 	unsigned long flags;
@@ -194,43 +212,30 @@ static async_cookie_t __async_schedule(async_func_t func, void *data, struct asy
 	current->flags |= PF_USED_ASYNC;
 
 	/* schedule for execution */
-	queue_work(system_unbound_wq, &entry->work);
+	queue_work_node(node, system_unbound_wq, &entry->work);
 
 	return newcookie;
 }
+EXPORT_SYMBOL_GPL(async_schedule_node_domain);
 
 /**
- * async_schedule - schedule a function for asynchronous execution
+ * async_schedule_node - NUMA specific version of async_schedule
  * @func: function to execute asynchronously
  * @data: data pointer to pass to the function
+ * @node: NUMA node that we want to schedule this on or close to
  *
  * Returns an async_cookie_t that may be used for checkpointing later.
  * Note: This function may be called from atomic or non-atomic contexts.
- */
-async_cookie_t async_schedule(async_func_t func, void *data)
-{
-	return __async_schedule(func, data, &async_dfl_domain);
-}
-EXPORT_SYMBOL_GPL(async_schedule);
-
-/**
- * async_schedule_domain - schedule a function for asynchronous execution within a certain domain
- * @func: function to execute asynchronously
- * @data: data pointer to pass to the function
- * @domain: the domain
  *
- * Returns an async_cookie_t that may be used for checkpointing later.
- * @domain may be used in the async_synchronize_*_domain() functions to
- * wait within a certain synchronization domain rather than globally.  A
- * synchronization domain is specified via @domain.  Note: This function
- * may be called from atomic or non-atomic contexts.
+ * The node requested will be honored on a best effort basis. If the node
+ * has no CPUs associated with it then the work is distributed among all
+ * available CPUs.
  */
-async_cookie_t async_schedule_domain(async_func_t func, void *data,
-				     struct async_domain *domain)
+async_cookie_t async_schedule_node(async_func_t func, void *data, int node)
 {
-	return __async_schedule(func, data, domain);
+	return async_schedule_node_domain(func, data, node, &async_dfl_domain);
 }
-EXPORT_SYMBOL_GPL(async_schedule_domain);
+EXPORT_SYMBOL_GPL(async_schedule_node);
 
 /**
  * async_synchronize_full - synchronize all asynchronous function calls


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [driver-core PATCH v8 7/9] driver core: Attach devices on CPU local to device node
  2018-12-05 17:25 [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls Alexander Duyck
                   ` (5 preceding siblings ...)
  2018-12-05 17:25 ` [driver-core PATCH v8 6/9] async: Add support for queueing on specific " Alexander Duyck
@ 2018-12-05 17:25 ` Alexander Duyck
  2018-12-05 17:25 ` [driver-core PATCH v8 8/9] PM core: Use new async_schedule_dev command Alexander Duyck
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 21+ messages in thread
From: Alexander Duyck @ 2018-12-05 17:25 UTC (permalink / raw)
  To: linux-kernel, gregkh
  Cc: mcgrof, linux-nvdimm, tj, akpm, linux-pm, jiangshanlai, rafael,
	len.brown, pavel, zwisler, dan.j.williams, dave.jiang,
	bvanassche, alexander.h.duyck

Call the asynchronous probe routines on a CPU local to the device node. By
doing this we should be able to improve our initialization time
significantly as we can avoid having to access the device from a remote
node which may introduce higher latency.

For example, in the case of initializing memory for NVDIMM this can
have a significant impact, as initializing 3TB on a remote node can
take up to 39 seconds while initializing it on a local node takes only
23 seconds. It is situations like this where we will see the biggest
improvement.

Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 drivers/base/dd.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/base/dd.c b/drivers/base/dd.c
index b731741059cb..49096adf96a1 100644
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -833,7 +833,7 @@ static int __device_attach(struct device *dev, bool allow_async)
 			 */
 			dev_dbg(dev, "scheduling asynchronous probe\n");
 			get_device(dev);
-			async_schedule(__device_attach_async_helper, dev);
+			async_schedule_dev(__device_attach_async_helper, dev);
 		} else {
 			pm_request_idle(dev);
 		}
@@ -993,7 +993,7 @@ static int __driver_attach(struct device *dev, void *data)
 		if (!dev->driver) {
 			get_device(dev);
 			dev->p->async_driver = drv;
-			async_schedule(__driver_attach_async_helper, dev);
+			async_schedule_dev(__driver_attach_async_helper, dev);
 		}
 		device_unlock(dev);
 		return 0;


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [driver-core PATCH v8 8/9] PM core: Use new async_schedule_dev command
  2018-12-05 17:25 [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls Alexander Duyck
                   ` (6 preceding siblings ...)
  2018-12-05 17:25 ` [driver-core PATCH v8 7/9] driver core: Attach devices on CPU local to device node Alexander Duyck
@ 2018-12-05 17:25 ` Alexander Duyck
  2018-12-05 17:26 ` [driver-core PATCH v8 9/9] libnvdimm: Schedule device registration on node local to the device Alexander Duyck
  2018-12-10 19:22 ` [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls Luis Chamberlain
  9 siblings, 0 replies; 21+ messages in thread
From: Alexander Duyck @ 2018-12-05 17:25 UTC (permalink / raw)
  To: linux-kernel, gregkh
  Cc: mcgrof, linux-nvdimm, tj, akpm, linux-pm, jiangshanlai, rafael,
	len.brown, pavel, zwisler, dan.j.williams, dave.jiang,
	bvanassche, alexander.h.duyck

Use the device specific version of the async_schedule commands to defer
various tasks related to power management. By doing this we should see a
slight improvement in performance as any device that is sensitive to
latency/locality in the setup will now be initializing on the node closest
to the device.

Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 drivers/base/power/main.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index a690fd400260..ebb8b61b52e9 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -726,7 +726,7 @@ void dpm_noirq_resume_devices(pm_message_t state)
 		reinit_completion(&dev->power.completion);
 		if (is_async(dev)) {
 			get_device(dev);
-			async_schedule(async_resume_noirq, dev);
+			async_schedule_dev(async_resume_noirq, dev);
 		}
 	}
 
@@ -883,7 +883,7 @@ void dpm_resume_early(pm_message_t state)
 		reinit_completion(&dev->power.completion);
 		if (is_async(dev)) {
 			get_device(dev);
-			async_schedule(async_resume_early, dev);
+			async_schedule_dev(async_resume_early, dev);
 		}
 	}
 
@@ -1047,7 +1047,7 @@ void dpm_resume(pm_message_t state)
 		reinit_completion(&dev->power.completion);
 		if (is_async(dev)) {
 			get_device(dev);
-			async_schedule(async_resume, dev);
+			async_schedule_dev(async_resume, dev);
 		}
 	}
 
@@ -1366,7 +1366,7 @@ static int device_suspend_noirq(struct device *dev)
 
 	if (is_async(dev)) {
 		get_device(dev);
-		async_schedule(async_suspend_noirq, dev);
+		async_schedule_dev(async_suspend_noirq, dev);
 		return 0;
 	}
 	return __device_suspend_noirq(dev, pm_transition, false);
@@ -1569,7 +1569,7 @@ static int device_suspend_late(struct device *dev)
 
 	if (is_async(dev)) {
 		get_device(dev);
-		async_schedule(async_suspend_late, dev);
+		async_schedule_dev(async_suspend_late, dev);
 		return 0;
 	}
 
@@ -1833,7 +1833,7 @@ static int device_suspend(struct device *dev)
 
 	if (is_async(dev)) {
 		get_device(dev);
-		async_schedule(async_suspend, dev);
+		async_schedule_dev(async_suspend, dev);
 		return 0;
 	}
 


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [driver-core PATCH v8 9/9] libnvdimm: Schedule device registration on node local to the device
  2018-12-05 17:25 [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls Alexander Duyck
                   ` (7 preceding siblings ...)
  2018-12-05 17:25 ` [driver-core PATCH v8 8/9] PM core: Use new async_schedule_dev command Alexander Duyck
@ 2018-12-05 17:26 ` Alexander Duyck
  2018-12-10 19:22 ` [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls Luis Chamberlain
  9 siblings, 0 replies; 21+ messages in thread
From: Alexander Duyck @ 2018-12-05 17:26 UTC (permalink / raw)
  To: linux-kernel, gregkh
  Cc: mcgrof, linux-nvdimm, tj, akpm, linux-pm, jiangshanlai, rafael,
	len.brown, pavel, zwisler, dan.j.williams, dave.jiang,
	bvanassche, alexander.h.duyck

Force the device registration for nvdimm devices to be closer to the actual
device. This is achieved by using either the NUMA node ID of the region, or
of the parent. By doing this we can have everything above the region based
on the region, and everything below the region based on the nvdimm bus.

By guaranteeing NUMA locality I see an improvement of as high as 25% for
per-node init of a system with 12TB of persistent memory.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
---
 drivers/nvdimm/bus.c |   11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
index f1fb39921236..b1e193541874 100644
--- a/drivers/nvdimm/bus.c
+++ b/drivers/nvdimm/bus.c
@@ -23,6 +23,7 @@
 #include <linux/ndctl.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
+#include <linux/cpu.h>
 #include <linux/fs.h>
 #include <linux/io.h>
 #include <linux/mm.h>
@@ -513,11 +514,15 @@ void __nd_device_register(struct device *dev)
 		set_dev_node(dev, to_nd_region(dev)->numa_node);
 
 	dev->bus = &nvdimm_bus_type;
-	if (dev->parent)
+	if (dev->parent) {
 		get_device(dev->parent);
+		if (dev_to_node(dev) == NUMA_NO_NODE)
+			set_dev_node(dev, dev_to_node(dev->parent));
+	}
 	get_device(dev);
-	async_schedule_domain(nd_async_device_register, dev,
-			&nd_async_domain);
+
+	async_schedule_dev_domain(nd_async_device_register, dev,
+				  &nd_async_domain);
 }
 
 void nd_device_register(struct device *dev)


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [driver-core PATCH v8 2/9] driver core: Establish order of operations for device_add and device_del via bitflag
  2018-12-05 17:25 ` [driver-core PATCH v8 2/9] driver core: Establish order of operations for device_add and device_del via bitflag Alexander Duyck
@ 2018-12-10 18:58   ` Dan Williams
  2018-12-10 19:35     ` Alexander Duyck
  0 siblings, 1 reply; 21+ messages in thread
From: Dan Williams @ 2018-12-10 18:58 UTC (permalink / raw)
  To: alexander.h.duyck
  Cc: Linux Kernel Mailing List, Greg KH, Luis R. Rodriguez,
	linux-nvdimm, Tejun Heo, Andrew Morton, Linux-pm mailing list,
	jiangshanlai, Rafael J. Wysocki, Brown, Len, Pavel Machek,
	zwisler, Dave Jiang, bvanassche

On Wed, Dec 5, 2018 at 9:25 AM Alexander Duyck
<alexander.h.duyck@linux.intel.com> wrote:
>
> Add an additional bit flag to the device struct named "dead".
>
> This additional flag provides a guarantee that when a device_del is
> executed on a given interface an async worker will not attempt to attach
> the driver following the earlier device_del call. Previously this
> guarantee was not present and could result in the device_del call
> attempting to remove a driver from an interface only to have the async
> worker attempt to probe the driver later when it finally completes the
> asynchronous probe call.
>
> Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> ---
>  drivers/base/core.c    |   11 +++++++++++
>  drivers/base/dd.c      |    8 ++++++--
>  include/linux/device.h |    5 +++++
>  3 files changed, 22 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/base/core.c b/drivers/base/core.c
> index f3e6ca4170b4..70358327303b 100644
> --- a/drivers/base/core.c
> +++ b/drivers/base/core.c
> @@ -2075,6 +2075,17 @@ void device_del(struct device *dev)
>         struct kobject *glue_dir = NULL;
>         struct class_interface *class_intf;
>
> +       /*
> +        * Hold the device lock and set the "dead" flag to guarantee that
> +        * the update behavior is consistent with the other bitfields near
> +        * it and that we cannot have an asynchronous probe routine trying
> +        * to run while we are tearing out the bus/class/sysfs from
> +        * underneath the device.
> +        */
> +       device_lock(dev);
> +       dev->dead = true;
> +       device_unlock(dev);
> +
>         /* Notify clients of device removal.  This call must come
>          * before dpm_sysfs_remove().
>          */
> diff --git a/drivers/base/dd.c b/drivers/base/dd.c
> index 88713f182086..3bb8c3e0f3da 100644
> --- a/drivers/base/dd.c
> +++ b/drivers/base/dd.c
> @@ -774,6 +774,10 @@ static void __device_attach_async_helper(void *_dev, async_cookie_t cookie)
>
>         device_lock(dev);
>
> +       /* device is or has been removed from the bus, just bail out */
> +       if (dev->dead)
> +               goto out_unlock;
> +

What do you think about moving this check into
__device_attach_driver() alongside all the other checks? That way we
also get ->dead checking through the __device_attach() path.

...and after that maybe it could be made a common helper
(dev_driver_checks()?) shared between __device_attach_driver() and
__driver_attach() to reduce some duplication.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls
  2018-12-05 17:25 [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls Alexander Duyck
                   ` (8 preceding siblings ...)
  2018-12-05 17:26 ` [driver-core PATCH v8 9/9] libnvdimm: Schedule device registration on node local to the device Alexander Duyck
@ 2018-12-10 19:22 ` Luis Chamberlain
  2018-12-10 23:25   ` Alexander Duyck
  9 siblings, 1 reply; 21+ messages in thread
From: Luis Chamberlain @ 2018-12-10 19:22 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: linux-kernel, gregkh, linux-nvdimm, tj, akpm, linux-pm,
	jiangshanlai, rafael, len.brown, pavel, zwisler, dan.j.williams,
	dave.jiang, bvanassche

On Wed, Dec 05, 2018 at 09:25:13AM -0800, Alexander Duyck wrote:
> This patch set provides functionality that will help to improve the
> locality of the async_schedule calls used to provide deferred
> initialization.
> 
> This patch set originally started out focused on just the one call to
> async_schedule_domain in the nvdimm tree that was being used to defer the
> device_add call however after doing some digging I realized the scope of
> this was much broader than I had originally planned. As such I went
> through and reworked the underlying infrastructure down to replacing the
> queue_work call itself with a function of my own and opted to try and
> provide a NUMA aware solution that would work for a broader audience.
> 
> In addition I have added several tweaks and/or clean-ups to the front of the
> patch set. Patches 1 through 4 address a number of issues that actually were
> causing the existing async_schedule calls to not show the performance that
> they could due to either not scaling on a per device basis, or due to issues
> that could result in a potential deadlock. For example, patch 4 addresses the
> fact that we were calling async_schedule once per driver instead of once
> per device, and as a result we would have still ended up with devices
> being probed on a non-local node without addressing this first.

No tests were added. Again, I think it would be good to add test
cases to showcase the old mechanisms, illustrate the new, and ensure
we don't regress both now and also help us ensure we don't regress
moving forward.

This is all too critical of a path for the kernel, and these changes
are rather intrusive. I'd really like to see test code for it now
rather than later.

  Luis

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [driver-core PATCH v8 2/9] driver core: Establish order of operations for device_add and device_del via bitflag
  2018-12-10 18:58   ` Dan Williams
@ 2018-12-10 19:35     ` Alexander Duyck
  2018-12-10 19:43       ` Dan Williams
  0 siblings, 1 reply; 21+ messages in thread
From: Alexander Duyck @ 2018-12-10 19:35 UTC (permalink / raw)
  To: Dan Williams
  Cc: Linux Kernel Mailing List, Greg KH, Luis R. Rodriguez,
	linux-nvdimm, Tejun Heo, Andrew Morton, Linux-pm mailing list,
	jiangshanlai, Rafael J. Wysocki, Brown, Len, Pavel Machek,
	zwisler, Dave Jiang, bvanassche

On Mon, 2018-12-10 at 10:58 -0800, Dan Williams wrote:
> On Wed, Dec 5, 2018 at 9:25 AM Alexander Duyck
> <alexander.h.duyck@linux.intel.com> wrote:
> > 
> > Add an additional bit flag to the device struct named "dead".
> > 
> > This additional flag provides a guarantee that when a device_del is
> > executed on a given interface an async worker will not attempt to attach
> > the driver following the earlier device_del call. Previously this
> > guarantee was not present and could result in the device_del call
> > attempting to remove a driver from an interface only to have the async
> > worker attempt to probe the driver later when it finally completes the
> > asynchronous probe call.
> > 
> > Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> > ---
> >  drivers/base/core.c    |   11 +++++++++++
> >  drivers/base/dd.c      |    8 ++++++--
> >  include/linux/device.h |    5 +++++
> >  3 files changed, 22 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/base/core.c b/drivers/base/core.c
> > index f3e6ca4170b4..70358327303b 100644
> > --- a/drivers/base/core.c
> > +++ b/drivers/base/core.c
> > @@ -2075,6 +2075,17 @@ void device_del(struct device *dev)
> >         struct kobject *glue_dir = NULL;
> >         struct class_interface *class_intf;
> > 
> > +       /*
> > +        * Hold the device lock and set the "dead" flag to guarantee that
> > +        * the update behavior is consistent with the other bitfields near
> > +        * it and that we cannot have an asynchronous probe routine trying
> > +        * to run while we are tearing out the bus/class/sysfs from
> > +        * underneath the device.
> > +        */
> > +       device_lock(dev);
> > +       dev->dead = true;
> > +       device_unlock(dev);
> > +
> >         /* Notify clients of device removal.  This call must come
> >          * before dpm_sysfs_remove().
> >          */
> > diff --git a/drivers/base/dd.c b/drivers/base/dd.c
> > index 88713f182086..3bb8c3e0f3da 100644
> > --- a/drivers/base/dd.c
> > +++ b/drivers/base/dd.c
> > @@ -774,6 +774,10 @@ static void __device_attach_async_helper(void *_dev, async_cookie_t cookie)
> > 
> >         device_lock(dev);
> > 
> > +       /* device is or has been removed from the bus, just bail out */
> > +       if (dev->dead)
> > +               goto out_unlock;
> > +
> 
> What do you think about moving this check into
> __device_attach_driver() alongside all the other checks? That way we
> also get ->dead checking through the __device_attach() path.

I'm not really sure that is the best spot to do that. Part of the
reason being that by placing it where I did we avoid messing with the
runtime power management for the parent if it was already powered off.

If anything I would say we could probably look at pulling the check out
and placing the driver check in __device_attach_async_helper since from
what I can tell the check is actually redundant in the non-async path
anyway since __device_attach already had taken the device lock and
checked dev->driver prior to calling __device_attach_driver.

> ...and after that maybe it could be made a common helper
> (dev_driver_checks()?) shared between __device_attach_driver() and
> __driver_attach() to reduce some duplication.

I'm not sure consolidating it into a function would really be worth the
extra effort. It would essentially just obfuscate the checks and I am
not sure you really save much with:
	if (dev_driver_checks(dev))
vs:
	if (!dev->dead && !dev->driver)

By the time you create the function and replace the few spots that are
making these checks you would end up most likely adding more complexity
to the kernel rather than reducing it any.




^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [driver-core PATCH v8 2/9] driver core: Establish order of operations for device_add and device_del via bitflag
  2018-12-10 19:35     ` Alexander Duyck
@ 2018-12-10 19:43       ` Dan Williams
  2018-12-10 20:57         ` Alexander Duyck
  0 siblings, 1 reply; 21+ messages in thread
From: Dan Williams @ 2018-12-10 19:43 UTC (permalink / raw)
  To: alexander.h.duyck
  Cc: Linux Kernel Mailing List, Greg KH, Luis R. Rodriguez,
	linux-nvdimm, Tejun Heo, Andrew Morton, Linux-pm mailing list,
	jiangshanlai, Rafael J. Wysocki, Brown, Len, Pavel Machek,
	zwisler, Dave Jiang, bvanassche

On Mon, Dec 10, 2018 at 11:35 AM Alexander Duyck
<alexander.h.duyck@linux.intel.com> wrote:
>
> On Mon, 2018-12-10 at 10:58 -0800, Dan Williams wrote:
> > On Wed, Dec 5, 2018 at 9:25 AM Alexander Duyck
> > <alexander.h.duyck@linux.intel.com> wrote:
> > >
> > > Add an additional bit flag to the device struct named "dead".
> > >
> > > This additional flag provides a guarantee that when a device_del is
> > > executed on a given interface an async worker will not attempt to attach
> > > the driver following the earlier device_del call. Previously this
> > > guarantee was not present and could result in the device_del call
> > > attempting to remove a driver from an interface only to have the async
> > > worker attempt to probe the driver later when it finally completes the
> > > asynchronous probe call.
> > >
> > > Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> > > ---
> > >  drivers/base/core.c    |   11 +++++++++++
> > >  drivers/base/dd.c      |    8 ++++++--
> > >  include/linux/device.h |    5 +++++
> > >  3 files changed, 22 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/base/core.c b/drivers/base/core.c
> > > index f3e6ca4170b4..70358327303b 100644
> > > --- a/drivers/base/core.c
> > > +++ b/drivers/base/core.c
> > > @@ -2075,6 +2075,17 @@ void device_del(struct device *dev)
> > >         struct kobject *glue_dir = NULL;
> > >         struct class_interface *class_intf;
> > >
> > > +       /*
> > > +        * Hold the device lock and set the "dead" flag to guarantee that
> > > +        * the update behavior is consistent with the other bitfields near
> > > +        * it and that we cannot have an asynchronous probe routine trying
> > > +        * to run while we are tearing out the bus/class/sysfs from
> > > +        * underneath the device.
> > > +        */
> > > +       device_lock(dev);
> > > +       dev->dead = true;
> > > +       device_unlock(dev);
> > > +
> > >         /* Notify clients of device removal.  This call must come
> > >          * before dpm_sysfs_remove().
> > >          */
> > > diff --git a/drivers/base/dd.c b/drivers/base/dd.c
> > > index 88713f182086..3bb8c3e0f3da 100644
> > > --- a/drivers/base/dd.c
> > > +++ b/drivers/base/dd.c
> > > @@ -774,6 +774,10 @@ static void __device_attach_async_helper(void *_dev, async_cookie_t cookie)
> > >
> > >         device_lock(dev);
> > >
> > > +       /* device is or has been removed from the bus, just bail out */
> > > +       if (dev->dead)
> > > +               goto out_unlock;
> > > +
> >
> > What do you think about moving this check into
> > __device_attach_driver() alongside all the other checks? That way we
> > also get ->dead checking through the __device_attach() path.
>
> I'm not really sure that is the best spot to do that. Part of the
> reason being that by placing it where I did we avoid messing with the
> runtime power management for the parent if it was already powered off.

...but this is already a rare event and the parent shouldn't otherwise
be bothered by a spurious pm_runtime wakeup event.

> If anything I would say we could probably look at pulling the check out
> and placing the driver check in __device_attach_async_helper since from
> what I can tell the check is actually redundant in the non-async path
> anyway since __device_attach already had taken the device lock and
> checked dev->driver prior to calling __device_attach_driver.
>
> > ...and after that maybe it could be made a common helper
> > (dev_driver_checks()?) shared between __device_attach_driver() and
> > __driver_attach() to reduce some duplication.
>
> I'm not sure consolidating it into a function would really be worth the
> extra effort. It would essentially just obfuscate the checks and I am
> not sure you really save much with:
>         if (dev_driver_checks(dev))
> vs:
>         if (!dev->dead && !dev->driver)
>
> By the time you create the function and replace the few spots that are
> making these checks you would end up most likely adding more complexity
> to the kernel rather than reducing it any.

No, I was talking about removing this duplication in
__device_attach_driver() and __driver_attach():

        if (ret == 0) {
                /* no match */
                return 0;
        } else if (ret == -EPROBE_DEFER) {
                dev_dbg(dev, "Device match requests probe deferral\n");
                driver_deferred_probe_add(dev);
        } else if (ret < 0) {
                dev_dbg(dev, "Bus failed to match device: %d", ret);
                return ret;
        } /* ret > 0 means positive match */

...and lead in with a dev->dead check.
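
Roughly, as a stand-alone model (the type, the helper name, and the
errno value below are all made up for the sketch -- this is not the
real driver-core code):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the real driver-core types and constants. */
#define EPROBE_DEFER 517

struct model_device {
	bool dead;		/* the proposed device_del() bitflag */
	bool deferred;		/* models driver_deferred_probe_add() */
};

/*
 * Hypothetical consolidated helper: lead in with the dev->dead check,
 * then translate the driver_match_device() result.  Returns 1 when the
 * caller should go on to probe, 0 for "no match", negative on error.
 * Note that -EPROBE_DEFER intentionally falls through to "probe": the
 * driver's probe routine is expected to run and return -EPROBE_DEFER
 * itself until its dependencies are ready.
 */
static int dev_driver_match_result(struct model_device *dev, int match_ret)
{
	if (dev->dead)
		return 0;		/* device being torn down, never match */
	if (match_ret == 0)
		return 0;		/* no match */
	if (match_ret == -EPROBE_DEFER) {
		dev->deferred = true;	/* driver_deferred_probe_add(dev) */
		return 1;		/* deliberate fallthrough to probe */
	}
	if (match_ret < 0)
		return match_ret;	/* bus failed to match the device */
	return 1;			/* positive match */
}
```

Both call sites would then funnel through one place, and the dead
check would come for free in the __device_attach() path too.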

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [driver-core PATCH v8 2/9] driver core: Establish order of operations for device_add and device_del via bitflag
  2018-12-10 19:43       ` Dan Williams
@ 2018-12-10 20:57         ` Alexander Duyck
  2018-12-10 21:15           ` Dan Williams
  0 siblings, 1 reply; 21+ messages in thread
From: Alexander Duyck @ 2018-12-10 20:57 UTC (permalink / raw)
  To: Dan Williams
  Cc: Linux Kernel Mailing List, Greg KH, Luis R. Rodriguez,
	linux-nvdimm, Tejun Heo, Andrew Morton, Linux-pm mailing list,
	jiangshanlai, Rafael J. Wysocki, Brown, Len, Pavel Machek,
	zwisler, Dave Jiang, bvanassche

On Mon, 2018-12-10 at 11:43 -0800, Dan Williams wrote:
> On Mon, Dec 10, 2018 at 11:35 AM Alexander Duyck
> <alexander.h.duyck@linux.intel.com> wrote:
> > 
> > On Mon, 2018-12-10 at 10:58 -0800, Dan Williams wrote:
> > > On Wed, Dec 5, 2018 at 9:25 AM Alexander Duyck
> > > <alexander.h.duyck@linux.intel.com> wrote:
> > > > 
> > > > Add an additional bit flag to the device struct named "dead".
> > > > 
> > > > This additional flag provides a guarantee that when a device_del is
> > > > executed on a given interface an async worker will not attempt to attach
> > > > the driver following the earlier device_del call. Previously this
> > > > guarantee was not present and could result in the device_del call
> > > > attempting to remove a driver from an interface only to have the async
> > > > worker attempt to probe the driver later when it finally completes the
> > > > asynchronous probe call.
> > > > 
> > > > Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> > > > ---
> > > >  drivers/base/core.c    |   11 +++++++++++
> > > >  drivers/base/dd.c      |    8 ++++++--
> > > >  include/linux/device.h |    5 +++++
> > > >  3 files changed, 22 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/drivers/base/core.c b/drivers/base/core.c
> > > > index f3e6ca4170b4..70358327303b 100644
> > > > --- a/drivers/base/core.c
> > > > +++ b/drivers/base/core.c
> > > > @@ -2075,6 +2075,17 @@ void device_del(struct device *dev)
> > > >         struct kobject *glue_dir = NULL;
> > > >         struct class_interface *class_intf;
> > > > 
> > > > +       /*
> > > > +        * Hold the device lock and set the "dead" flag to guarantee that
> > > > +        * the update behavior is consistent with the other bitfields near
> > > > +        * it and that we cannot have an asynchronous probe routine trying
> > > > +        * to run while we are tearing out the bus/class/sysfs from
> > > > +        * underneath the device.
> > > > +        */
> > > > +       device_lock(dev);
> > > > +       dev->dead = true;
> > > > +       device_unlock(dev);
> > > > +
> > > >         /* Notify clients of device removal.  This call must come
> > > >          * before dpm_sysfs_remove().
> > > >          */
> > > > diff --git a/drivers/base/dd.c b/drivers/base/dd.c
> > > > index 88713f182086..3bb8c3e0f3da 100644
> > > > --- a/drivers/base/dd.c
> > > > +++ b/drivers/base/dd.c
> > > > @@ -774,6 +774,10 @@ static void __device_attach_async_helper(void *_dev, async_cookie_t cookie)
> > > > 
> > > >         device_lock(dev);
> > > > 
> > > > +       /* device is or has been removed from the bus, just bail out */
> > > > +       if (dev->dead)
> > > > +               goto out_unlock;
> > > > +
> > > 
> > > What do you think about moving this check into
> > > __device_attach_driver() alongside all the other checks? That way we
> > > also get ->dead checking through the __device_attach() path.
> > 
> > I'm not really sure that is the best spot to do that. Part of the
> > reason being that by placing it where I did we avoid messing with the
> > runtime power management for the parent if it was already powered off.
> 
> ...but this is already a rare event and the parent shouldn't otherwise
> be bothered by a spurious pm_runtime wakeup event.
> 
> > If anything I would say we could probably look at pulling the check out
> > and placing the driver check in __device_attach_async_helper since from
> > what I can tell the check is actually redundant in the non-async path
> > anyway since __device_attach already had taken the device lock and
> > checked dev->driver prior to calling __device_attach_driver.
> > 
> > > ...and after that maybe it could be made a common helper
> > > (dev_driver_checks()?) shared between __device_attach_driver() and
> > > __driver_attach() to reduce some duplication.
> > 
> > I'm not sure consolidating it into a function would really be worth the
> > extra effort. It would essentially just obfuscate the checks and I am
> > not sure you really save much with:
> >         if (dev_driver_checks(dev))
> > vs:
> >         if (!dev->dead && !dev->driver)
> > 
> > By the time you create the function and replace the few spots that are
> > making these checks you would end up most likely adding more complexity
> > to the kernel rather than reducing it any.
> 
> No, I was talking about removing this duplication in
> __device_attach_driver() and __driver_attach():
> 
>         if (ret == 0) {
>                 /* no match */
>                 return 0;
>         } else if (ret == -EPROBE_DEFER) {
>                 dev_dbg(dev, "Device match requests probe deferral\n");
>                 driver_deferred_probe_add(dev);

Is this bit of code correct? It seems like there should be a return
here, doesn't it?

I just double checked and this is what is in the kernel too.

>         } else if (ret < 0) {
>                 dev_dbg(dev, "Bus failed to match device: %d", ret);
>                 return ret;
>         } /* ret > 0 means positive match */
> 
> ...and lead in with a dev->dead check.

I would think that we would want to check for dev->dead before we even
call driver_match_device. That way we don't have the match function
crawling around a device that is being disassembled. Is that what you
were referring to?

Also the context for the two functions seems to be a bit different. In
the case of __device_attach_driver the device_lock is already held. In
__driver_attach the lock on the device isn't taken until after a match
has been found.
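
To sketch the ordering I have in mind -- a toy single-threaded model,
with device_lock()/device_unlock() elided as comments and every name a
stand-in for the real driver-core code:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Single-threaded model of the ordering the dead flag provides.  In the
 * real code both paths serialize on device_lock(), so the async helper
 * either runs entirely before device_del() sets the flag, or it takes
 * the lock afterwards, observes dev->dead, and bails out.
 */
struct model_device {
	bool dead;		/* set by device_del() under the lock */
	bool bound;		/* models a completed probe */
};

static void model_device_del(struct model_device *dev)
{
	/* device_lock(dev); */
	dev->dead = true;
	/* device_unlock(dev); */
	/* ...now tear out bus/class/sysfs with no probe racing us... */
}

static void model_attach_async_helper(struct model_device *dev)
{
	/* device_lock(dev) is held here in the real code */
	if (!dev->dead)
		dev->bound = true;	/* models driver_probe_device() */
}
```

Checking dev->dead before driver_match_device would additionally keep
the match callbacks from crawling around a half-disassembled device.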


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [driver-core PATCH v8 2/9] driver core: Establish order of operations for device_add and device_del via bitflag
  2018-12-10 20:57         ` Alexander Duyck
@ 2018-12-10 21:15           ` Dan Williams
  2018-12-10 21:23             ` Dan Williams
  0 siblings, 1 reply; 21+ messages in thread
From: Dan Williams @ 2018-12-10 21:15 UTC (permalink / raw)
  To: alexander.h.duyck
  Cc: Linux Kernel Mailing List, Greg KH, Luis R. Rodriguez,
	linux-nvdimm, Tejun Heo, Andrew Morton, Linux-pm mailing list,
	jiangshanlai, Rafael J. Wysocki, Brown, Len, Pavel Machek,
	zwisler, Dave Jiang, bvanassche

On Mon, Dec 10, 2018 at 12:58 PM Alexander Duyck
<alexander.h.duyck@linux.intel.com> wrote:
>
> On Mon, 2018-12-10 at 11:43 -0800, Dan Williams wrote:
> > On Mon, Dec 10, 2018 at 11:35 AM Alexander Duyck
> > <alexander.h.duyck@linux.intel.com> wrote:
> > >
> > > On Mon, 2018-12-10 at 10:58 -0800, Dan Williams wrote:
> > > > On Wed, Dec 5, 2018 at 9:25 AM Alexander Duyck
> > > > <alexander.h.duyck@linux.intel.com> wrote:
> > > > >
> > > > > Add an additional bit flag to the device struct named "dead".
> > > > >
> > > > > This additional flag provides a guarantee that when a device_del is
> > > > > executed on a given interface an async worker will not attempt to attach
> > > > > the driver following the earlier device_del call. Previously this
> > > > > guarantee was not present and could result in the device_del call
> > > > > attempting to remove a driver from an interface only to have the async
> > > > > worker attempt to probe the driver later when it finally completes the
> > > > > asynchronous probe call.
> > > > >
> > > > > Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> > > > > ---
> > > > >  drivers/base/core.c    |   11 +++++++++++
> > > > >  drivers/base/dd.c      |    8 ++++++--
> > > > >  include/linux/device.h |    5 +++++
> > > > >  3 files changed, 22 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/drivers/base/core.c b/drivers/base/core.c
> > > > > index f3e6ca4170b4..70358327303b 100644
> > > > > --- a/drivers/base/core.c
> > > > > +++ b/drivers/base/core.c
> > > > > @@ -2075,6 +2075,17 @@ void device_del(struct device *dev)
> > > > >         struct kobject *glue_dir = NULL;
> > > > >         struct class_interface *class_intf;
> > > > >
> > > > > +       /*
> > > > > +        * Hold the device lock and set the "dead" flag to guarantee that
> > > > > +        * the update behavior is consistent with the other bitfields near
> > > > > +        * it and that we cannot have an asynchronous probe routine trying
> > > > > +        * to run while we are tearing out the bus/class/sysfs from
> > > > > +        * underneath the device.
> > > > > +        */
> > > > > +       device_lock(dev);
> > > > > +       dev->dead = true;
> > > > > +       device_unlock(dev);
> > > > > +
> > > > >         /* Notify clients of device removal.  This call must come
> > > > >          * before dpm_sysfs_remove().
> > > > >          */
> > > > > diff --git a/drivers/base/dd.c b/drivers/base/dd.c
> > > > > index 88713f182086..3bb8c3e0f3da 100644
> > > > > --- a/drivers/base/dd.c
> > > > > +++ b/drivers/base/dd.c
> > > > > @@ -774,6 +774,10 @@ static void __device_attach_async_helper(void *_dev, async_cookie_t cookie)
> > > > >
> > > > >         device_lock(dev);
> > > > >
> > > > > +       /* device is or has been removed from the bus, just bail out */
> > > > > +       if (dev->dead)
> > > > > +               goto out_unlock;
> > > > > +
> > > >
> > > > What do you think about moving this check into
> > > > __device_attach_driver() alongside all the other checks? That way we
> > > > also get ->dead checking through the __device_attach() path.
> > >
> > > I'm not really sure that is the best spot to do that. Part of the
> > > reason being that by placing it where I did we avoid messing with the
> > > runtime power management for the parent if it was already powered off.
> >
> > ...but this is already a rare event and the parent shouldn't otherwise
> > be bothered by a spurious pm_runtime wakeup event.
> >
> > > If anything I would say we could probably look at pulling the check out
> > > and placing the driver check in __device_attach_async_helper since from
> > > what I can tell the check is actually redundant in the non-async path
> > > anyway since __device_attach already had taken the device lock and
> > > checked dev->driver prior to calling __device_attach_driver.
> > >
> > > > ...and after that maybe it could be made a common helper
> > > > (dev_driver_checks()?) shared between __device_attach_driver() and
> > > > __driver_attach() to reduce some duplication.
> > >
> > > I'm not sure consolidating it into a function would really be worth the
> > > extra effort. It would essentially just obfuscate the checks and I am
> > > not sure you really save much with:
> > >         if (dev_driver_checks(dev))
> > > vs:
> > >         if (!dev->dead && !dev->driver)
> > >
> > > By the time you create the function and replace the few spots that are
> > > making these checks you would end up most likely adding more complexity
> > > to the kernel rather than reducing it any.
> >
> > No, I was talking about removing this duplication in
> > __device_attach_driver() and __driver_attach():
> >
> >         if (ret == 0) {
> >                 /* no match */
> >                 return 0;
> >         } else if (ret == -EPROBE_DEFER) {
> >                 dev_dbg(dev, "Device match requests probe deferral\n");
> >                 driver_deferred_probe_add(dev);
>
> Is this bit of code correct? It seems like there should be a return
> here, doesn't it?

It does look odd, but I think it's ok, as the driver is expected to
have its probe routine called multiple times and to return
-EPROBE_DEFER if it's not ready yet.

> I just double checked and this is what is in the kernel too.

Yeah, I just copy-pasted it, but it might deserve a comment that the
fallthrough / no return is on purpose.
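
Something like this stand-alone model of the deferral behavior (all
names made up for the sketch, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

#define EPROBE_DEFER 517

/*
 * Toy model of probe deferral: a probe routine may be invoked several
 * times and keeps returning -EPROBE_DEFER until its dependency shows
 * up, which is why the match code can fall through to probing without
 * an early return.
 */
static bool clock_ready;	/* stands in for a resource the driver needs */

static int model_probe(void)
{
	if (!clock_ready)
		return -EPROBE_DEFER;	/* not ready, try again later */
	return 0;			/* bound successfully */
}

/*
 * Models the retry that driver_deferred_probe_add() arranges: keep
 * calling probe until it stops deferring (bounded here for the model).
 */
static int model_retry_probe(int max_tries)
{
	int ret = -EPROBE_DEFER;

	while (max_tries-- > 0 && ret == -EPROBE_DEFER)
		ret = model_probe();
	return ret;
}
```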

> >         } else if (ret < 0) {
> >                 dev_dbg(dev, "Bus failed to match device: %d", ret);
> >                 return ret;
> >         } /* ret > 0 means positive match */
> >
> > ...and lead in with a dev->dead check.
>
> I would think that we would want to check for dev->dead before we even
> call driver_match_device. That way we don't have the match function
> crawling around a device that is being disassembled. Is that what you
> were referring to?

I wasn't too concerned about optimizing the case where the probe path
loses the race with device_del().

> Also the context for the two functions seems to be a bit different. In
> the case of __device_attach_driver the device_lock is already held. In
> __driver_attach the lock on the device isn't taken until after a match
> has been found.

Yes, I was only pattern matching when looking at the context of where
dev->dead is checked in __driver_attach() and wondering why it was
checked outside of __device_attach_driver().

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [driver-core PATCH v8 2/9] driver core: Establish order of operations for device_add and device_del via bitflag
  2018-12-10 21:15           ` Dan Williams
@ 2018-12-10 21:23             ` Dan Williams
  2018-12-10 22:24               ` Alexander Duyck
  0 siblings, 1 reply; 21+ messages in thread
From: Dan Williams @ 2018-12-10 21:23 UTC (permalink / raw)
  To: alexander.h.duyck
  Cc: Linux Kernel Mailing List, Greg KH, Luis R. Rodriguez,
	linux-nvdimm, Tejun Heo, Andrew Morton, Linux-pm mailing list,
	jiangshanlai, Rafael J. Wysocki, Brown, Len, Pavel Machek,
	zwisler, Dave Jiang, bvanassche

On Mon, Dec 10, 2018 at 1:15 PM Dan Williams <dan.j.williams@intel.com> wrote:
>
> On Mon, Dec 10, 2018 at 12:58 PM Alexander Duyck
> <alexander.h.duyck@linux.intel.com> wrote:
[..]
> > Also the context for the two functions seems to be a bit different. In
> > the case of __device_attach_driver the device_lock is already held. In
> > __driver_attach the lock on the device isn't taken until after a match
> > has been found.
>
> Yes, I was only pattern matching when looking at the context of where
> dev->dead is checked in __driver_attach() and wondering why it was
> checked outside of __device_attach_driver()

...and now I realize the bigger point of your concern: we need to
check dev->dead after acquiring the device_lock, otherwise the race is
back. We can defer that consolidation, but the larger concern of
making it internal to __device_attach_driver() still stands.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [driver-core PATCH v8 2/9] driver core: Establish order of operations for device_add and device_del via bitflag
  2018-12-10 21:23             ` Dan Williams
@ 2018-12-10 22:24               ` Alexander Duyck
  2018-12-10 22:41                 ` Dan Williams
  0 siblings, 1 reply; 21+ messages in thread
From: Alexander Duyck @ 2018-12-10 22:24 UTC (permalink / raw)
  To: Dan Williams
  Cc: Linux Kernel Mailing List, Greg KH, Luis R. Rodriguez,
	linux-nvdimm, Tejun Heo, Andrew Morton, Linux-pm mailing list,
	jiangshanlai, Rafael J. Wysocki, Brown, Len, Pavel Machek,
	zwisler, Dave Jiang, bvanassche

On Mon, 2018-12-10 at 13:23 -0800, Dan Williams wrote:
> On Mon, Dec 10, 2018 at 1:15 PM Dan Williams <dan.j.williams@intel.com> wrote:
> > 
> > On Mon, Dec 10, 2018 at 12:58 PM Alexander Duyck
> > <alexander.h.duyck@linux.intel.com> wrote:
> 
> [..]
> > > Also the context for the two functions seems to be a bit different. In
> > > the case of __device_attach_driver the device_lock is already held. In
> > > __driver_attach the lock on the device isn't taken until after a match
> > > has been found.
> > 
> > Yes, I was only pattern matching when looking at the context of where
> > dev->dead is checked in __driver_attach() and wondering why it was
> > checked outside of __device_attach_driver()
> 
> ...and now I realize the bigger point of your concern, we need to
> check dev->dead after acquiring the device_lock otherwise the race is
> back. We can defer that consolidation, but the larger concern of
> making it internal to __device_attach_driver() still stands.

I'm still not a fan of moving it into __device_attach_driver. I would
much rather pull out the dev->driver check and instead place that in
__device_attach_async_helper.

The __device_attach function, as I said, took the device_lock and had
already checked dev->driver. So in the non-async path it shouldn't be
possible for dev->driver to ever be set anyway. In addition,
__device_attach_driver is called once for each driver on a given bus,
so dropping the test should reduce driver load time since it is one
less test that has to be performed per driver.
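
As a rough stand-alone sketch of what I mean (every name below is
invented for the model, not the actual driver-core code):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Model of the proposal: do the dev->dead and dev->driver checks once
 * in the async helper, under the device lock, so the per-driver match
 * loop does not have to repeat them for every driver on the bus.
 */
struct model_device {
	bool dead;		/* device_del() has run */
	bool has_driver;	/* models dev->driver != NULL */
};

static int drivers_tested;	/* how many drivers the loop examined */

/*
 * Models bus_for_each_drv() + __device_attach_driver() with the
 * per-driver dev->driver test dropped.
 */
static void model_match_loop(struct model_device *dev, int ndrivers)
{
	int i;

	for (i = 0; i < ndrivers; i++)
		drivers_tested++;	/* match/probe attempt, no extra test */
	(void)dev;
}

static void model_attach_async_helper(struct model_device *dev, int ndrivers)
{
	/* device_lock(dev) would be held here */
	if (dev->dead || dev->has_driver)
		return;			/* checked once, not once per driver */
	model_match_loop(dev, ndrivers);
}
```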


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [driver-core PATCH v8 2/9] driver core: Establish order of operations for device_add and device_del via bitflag
  2018-12-10 22:24               ` Alexander Duyck
@ 2018-12-10 22:41                 ` Dan Williams
  0 siblings, 0 replies; 21+ messages in thread
From: Dan Williams @ 2018-12-10 22:41 UTC (permalink / raw)
  To: alexander.h.duyck
  Cc: Linux Kernel Mailing List, Greg KH, Luis R. Rodriguez,
	linux-nvdimm, Tejun Heo, Andrew Morton, Linux-pm mailing list,
	jiangshanlai, Rafael J. Wysocki, Brown, Len, Pavel Machek,
	zwisler, Dave Jiang, bvanassche

On Mon, Dec 10, 2018 at 2:24 PM Alexander Duyck
<alexander.h.duyck@linux.intel.com> wrote:
>
> On Mon, 2018-12-10 at 13:23 -0800, Dan Williams wrote:
> > On Mon, Dec 10, 2018 at 1:15 PM Dan Williams <dan.j.williams@intel.com> wrote:
> > >
> > > On Mon, Dec 10, 2018 at 12:58 PM Alexander Duyck
> > > <alexander.h.duyck@linux.intel.com> wrote:
> >
> > [..]
> > > > Also the context for the two functions seems to be a bit different. In
> > > > the case of __device_attach_driver the device_lock is already held. In
> > > > __driver_attach the lock on the device isn't taken until after a match
> > > > has been found.
> > >
> > > Yes, I was only pattern matching when looking at the context of where
> > > dev->dead is checked in __driver_attach() and wondering why it was
> > > checked outside of __device_attach_driver()
> >
> > ...and now I realize the bigger point of your concern, we need to
> > check dev->dead after acquiring the device_lock otherwise the race is
> > back. We can defer that consolidation, but the larger concern of
> > making it internal to __device_attach_driver() still stands.
>
> I'm still not a fan of moving it into __device_attach_driver. I would
> much rather pull out the dev->driver check and instead place that in
> __device_attach_async_helper.
>
> The __device_attach function as I said took the device_lock and had
> already checked dev->driver. So in the non-async path it shouldn't be
> possible for dev->driver to ever be set anyway.

True.

> In addition
> __device_attach_driver is called once for each driver on a given bus,
> so dropping the test should reduce driver load time since it is one
> less test that has to be performed per driver.

Ok. You can add my Reviewed-by.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls
  2018-12-10 19:22 ` [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls Luis Chamberlain
@ 2018-12-10 23:25   ` Alexander Duyck
  2018-12-10 23:35     ` Luis Chamberlain
  0 siblings, 1 reply; 21+ messages in thread
From: Alexander Duyck @ 2018-12-10 23:25 UTC (permalink / raw)
  To: Luis Chamberlain
  Cc: linux-kernel, gregkh, linux-nvdimm, tj, akpm, linux-pm,
	jiangshanlai, rafael, len.brown, pavel, zwisler, dan.j.williams,
	dave.jiang, bvanassche

On Mon, 2018-12-10 at 11:22 -0800, Luis Chamberlain wrote:
> On Wed, Dec 05, 2018 at 09:25:13AM -0800, Alexander Duyck wrote:
> > This patch set provides functionality that will help to improve the
> > locality of the async_schedule calls used to provide deferred
> > initialization.
> > 
> > This patch set originally started out focused on just the one call to
> > async_schedule_domain in the nvdimm tree that was being used to defer the
> > device_add call however after doing some digging I realized the scope of
> > this was much broader than I had originally planned. As such I went
> > through and reworked the underlying infrastructure down to replacing the
> > queue_work call itself with a function of my own and opted to try and
> > provide a NUMA aware solution that would work for a broader audience.
> > 
> > In addition I have added several tweaks and/or clean-ups to the front of the
> > patch set. Patches 1 through 4 address a number of issues that actually were
> > causing the existing async_schedule calls to not show the performance that
> > they could due to either not scaling on a per device basis, or due to issues
> > that could result in a potential deadlock. For example, patch 4 addresses the
> > fact that we were calling async_schedule once per driver instead of once
> > per device, and as a result we would have still ended up with devices
> > being probed on a non-local node without addressing this first.
> 
> No tests were added. Again, I think it would be good to add test
> cases to showcase the old mechanisms, illustrate the new ones, and
> help us ensure we don't regress, both now and moving forward.
> 
> This is too critical a path for the kernel, and these changes are
> rather intrusive. I'd really like to see test code for it now
> rather than later.
> 
>   Luis

Sorry about that. I was more focused on the rewrite of patch 2 and
overlooked the comment about lib/test_kmod.c.

I'll look into it and see if I can squeeze it in for v9.

Thanks.

- Alex


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls
  2018-12-10 23:25   ` Alexander Duyck
@ 2018-12-10 23:35     ` Luis Chamberlain
  0 siblings, 0 replies; 21+ messages in thread
From: Luis Chamberlain @ 2018-12-10 23:35 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: linux-kernel, gregkh, linux-nvdimm, tj, akpm, linux-pm,
	jiangshanlai, rafael, len.brown, pavel, zwisler, dan.j.williams,
	dave.jiang, bvanassche

On Mon, Dec 10, 2018 at 03:25:04PM -0800, Alexander Duyck wrote:
> On Mon, 2018-12-10 at 11:22 -0800, Luis Chamberlain wrote:
> > On Wed, Dec 05, 2018 at 09:25:13AM -0800, Alexander Duyck wrote:
> > > This patch set provides functionality that will help to improve the
> > > locality of the async_schedule calls used to provide deferred
> > > initialization.
> > > 
> > > This patch set originally started out focused on just the one call to
> > > async_schedule_domain in the nvdimm tree that was being used to defer the
> > > device_add call however after doing some digging I realized the scope of
> > > this was much broader than I had originally planned. As such I went
> > > through and reworked the underlying infrastructure down to replacing the
> > > queue_work call itself with a function of my own and opted to try and
> > > provide a NUMA aware solution that would work for a broader audience.
> > > 
> > > In addition I have added several tweaks and/or clean-ups to the front of the
> > > patch set. Patches 1 through 4 address a number of issues that actually were
> > > causing the existing async_schedule calls to not show the performance that
> > > they could due to either not scaling on a per device basis, or due to issues
> > > that could result in a potential deadlock. For example, patch 4 addresses the
> > > fact that we were calling async_schedule once per driver instead of once
> > > per device, and as a result we would have still ended up with devices
> > > being probed on a non-local node without addressing this first.
> > 
> > No tests were added. Again, I think it would be good to add test
> > cases to showcase the old mechanisms, illustrate the new ones, and
> > help us ensure we don't regress, both now and moving forward.
> > 
> > This is too critical a path for the kernel, and these changes are
> > rather intrusive. I'd really like to see test code for it now
> > rather than later.
> > 
> >   Luis
> 
> Sorry about that. I was more focused on the rewrite of patch 2 and
> overlooked the comment about lib/test_kmod.c.
> 
> I'll look into it and see if I can squeeze it in for v9.

Superb!

  Luis

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2018-12-10 23:35 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-12-05 17:25 [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls Alexander Duyck
2018-12-05 17:25 ` [driver-core PATCH v8 1/9] driver core: Move async_synchronize_full call Alexander Duyck
2018-12-05 17:25 ` [driver-core PATCH v8 2/9] driver core: Establish order of operations for device_add and device_del via bitflag Alexander Duyck
2018-12-10 18:58   ` Dan Williams
2018-12-10 19:35     ` Alexander Duyck
2018-12-10 19:43       ` Dan Williams
2018-12-10 20:57         ` Alexander Duyck
2018-12-10 21:15           ` Dan Williams
2018-12-10 21:23             ` Dan Williams
2018-12-10 22:24               ` Alexander Duyck
2018-12-10 22:41                 ` Dan Williams
2018-12-05 17:25 ` [driver-core PATCH v8 3/9] device core: Consolidate locking and unlocking of parent and device Alexander Duyck
2018-12-05 17:25 ` [driver-core PATCH v8 4/9] driver core: Probe devices asynchronously instead of the driver Alexander Duyck
2018-12-05 17:25 ` [driver-core PATCH v8 5/9] workqueue: Provide queue_work_node to queue work near a given NUMA node Alexander Duyck
2018-12-05 17:25 ` [driver-core PATCH v8 6/9] async: Add support for queueing on specific " Alexander Duyck
2018-12-05 17:25 ` [driver-core PATCH v8 7/9] driver core: Attach devices on CPU local to device node Alexander Duyck
2018-12-05 17:25 ` [driver-core PATCH v8 8/9] PM core: Use new async_schedule_dev command Alexander Duyck
2018-12-05 17:26 ` [driver-core PATCH v8 9/9] libnvdimm: Schedule device registration on node local to the device Alexander Duyck
2018-12-10 19:22 ` [driver-core PATCH v8 0/9] Add NUMA aware async_schedule calls Luis Chamberlain
2018-12-10 23:25   ` Alexander Duyck
2018-12-10 23:35     ` Luis Chamberlain
