From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
To: linux-kernel@vger.kernel.org, gregkh@linuxfoundation.org
Cc: len.brown@intel.com, bvanassche@acm.org,
	linux-pm@vger.kernel.org, alexander.h.duyck@linux.intel.com,
	linux-nvdimm@lists.01.org, jiangshanlai@gmail.com,
	mcgrof@kernel.org, pavel@ucw.cz, zwisler@kernel.org,
	tj@kernel.org, akpm@linux-foundation.org, rafael@kernel.org
Subject: [driver-core PATCH v7 0/9] Add NUMA aware async_schedule calls
Date: Wed, 28 Nov 2018 16:32:06 -0800
Message-ID: <154345118835.18040.17186161872550839244.stgit@ahduyck-desk1.amr.corp.intel.com>

This patch set improves the NUMA locality of the async_schedule calls
used to provide deferred device initialization.

This patch set originally started out focused on just the one call to
async_schedule_domain in the nvdimm tree that was being used to defer the
device_add call. However, after doing some digging I realized the scope of
this was much broader than I had originally planned. As such I went through
and reworked the underlying infrastructure, down to replacing the
queue_work call itself with a function of my own, and opted to try to
provide a NUMA aware solution that would work for a broader audience.
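
For reference, here is a minimal sketch of what the new NUMA aware entry
points look like in use. The callback and function names below are made up
for illustration; async_schedule_node(), async_schedule_dev() and
queue_work_node() are the interfaces added later in this series:

  #include <linux/async.h>
  #include <linux/device.h>
  #include <linux/workqueue.h>

  /* Illustrative async callback; the name and body are invented. */
  static void my_deferred_init(void *data, async_cookie_t cookie)
  {
          struct device *dev = data;

          /* Runs on a CPU close to dev's NUMA node whenever possible. */
          dev_info(dev, "deferred init on node %d\n", dev_to_node(dev));
  }

  static void example_usage(struct device *dev, struct workqueue_struct *wq,
                            struct work_struct *work)
  {
          /* Queue an async callback near an explicit NUMA node... */
          async_schedule_node(my_deferred_init, dev, dev_to_node(dev));

          /* ...or let the helper derive the node from the device. */
          async_schedule_dev(my_deferred_init, dev);

          /* Plain workqueue users get the same placement via queue_work_node(). */
          queue_work_node(dev_to_node(dev), wq, work);
  }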

In addition I have added several tweaks and clean-ups at the front of the
patch set. Patches 1 through 4 address a number of issues that were keeping
the existing async_schedule calls from performing as well as they could,
either because the work did not scale on a per-device basis or because of
issues that could result in a potential deadlock. For example, patch 4
addresses the fact that we were calling async_schedule once per driver
instead of once per device; without fixing that first we would still have
ended up with devices being probed on a non-local node.
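
As a rough sketch of what that per-device change looks like (the helper
names here are invented for illustration; the real code lives in
drivers/base/dd.c):

  /* Before (simplified): one async item per driver, so every one of that
   * driver's devices was probed from whichever CPU the item ran on. */
  async_schedule(probe_all_devices_of_driver, drv);      /* invented name */

  /* After (simplified): one async item per device, queued on a CPU local
   * to that device's NUMA node. */
  async_schedule_dev(probe_one_device, dev);             /* invented name */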

RFC->v1:
    Dropped nvdimm patch to submit later.
        It relies on code in libnvdimm development tree.
    Simplified queue_work_near to just convert node into a CPU.
    Split up drivers core and PM core patches.
v1->v2:
    Renamed queue_work_near to queue_work_node
    Added WARN_ON_ONCE if we use queue_work_node with per-cpu workqueue
v2->v3:
    Added Acked-by for queue_work_node patch
    Continued rename from _near to _node to be consistent with queue_work_node
        Renamed async_schedule_near_domain to async_schedule_node_domain
        Renamed async_schedule_near to async_schedule_node
    Added kerneldoc for new async_schedule_XXX functions
    Updated patch description for patch 4 to include data on potential gains
v3->v4:
    Added patch to consolidate use of need_parent_lock
    Make asynchronous driver probing explicit about use of drvdata
v4->v5:
    Added patch to move async_synchronize_full to address deadlock
    Added bit async_probe to act as mutex for probe/remove calls
    Added back nvdimm patch as code it relies on is now in Linus's tree
    Incorporated review comments on parent & device locking consolidation
    Rebased on latest linux-next
v5->v6:
    Drop the "This patch" or "This change" from start of patch descriptions.
    Drop unnecessary parenthesis in first patch
    Use same wording for "selecting a CPU" in comments added in first patch
    Added kernel documentation for async_probe member of device
    Fixed up comments for async_schedule calls in patch 2
    Moved code related to setting the async driver out of device.h and into dd.c
    Added Reviewed-by for several patches
v6->v7:
    Fixed typo which had kernel doc refer to "lock" when I meant "unlock"
    Dropped "bool X:1" to "u8 X:1" from patch description
    Added async_driver to device_private structure to store driver
    Dropped unnecessary code shuffle from async_probe patch
    Reordered patches to move fixes up to front
    Added Reviewed-by for several patches
    Updated cover page and patch descriptions throughout the set

---

Alexander Duyck (9):
      driver core: Move async_synchronize_full call
      driver core: Establish clear order of operations for deferred probe and remove
      device core: Consolidate locking and unlocking of parent and device
      driver core: Probe devices asynchronously instead of the driver
      workqueue: Provide queue_work_node to queue work near a given NUMA node
      async: Add support for queueing on specific NUMA node
      driver core: Attach devices on CPU local to device node
      PM core: Use new async_schedule_dev command
      libnvdimm: Schedule device registration on node local to the device


 drivers/base/base.h       |    4 +
 drivers/base/bus.c        |   46 ++---------
 drivers/base/dd.c         |  182 +++++++++++++++++++++++++++++++++++++++------
 drivers/base/power/main.c |   12 +--
 drivers/nvdimm/bus.c      |   11 ++-
 include/linux/async.h     |   82 ++++++++++++++++++++
 include/linux/device.h    |    3 +
 include/linux/workqueue.h |    2 
 kernel/async.c            |   53 +++++++------
 kernel/workqueue.c        |   84 +++++++++++++++++++++
 10 files changed, 380 insertions(+), 99 deletions(-)

Thread overview: 20+ messages
2018-11-29  0:32 Alexander Duyck [this message]
2018-11-29  0:32 ` [driver-core PATCH v7 1/9] driver core: Move async_synchronize_full call Alexander Duyck
2018-11-30 23:21   ` Luis Chamberlain
2018-11-29  0:32 ` [driver-core PATCH v7 2/9] driver core: Establish clear order of operations for deferred probe and remove Alexander Duyck
2018-11-29  1:57   ` Dan Williams
2018-11-29 18:07     ` Alexander Duyck
2018-11-29 18:55       ` Dan Williams
2018-11-29 21:53         ` Alexander Duyck
2018-11-29 22:00           ` Dan Williams
2018-11-30 23:40   ` Luis Chamberlain
2018-11-29  0:32 ` [driver-core PATCH v7 3/9] device core: Consolidate locking and unlocking of parent and device Alexander Duyck
2018-12-01  0:01   ` Luis Chamberlain
2018-11-29  0:32 ` [driver-core PATCH v7 4/9] driver core: Probe devices asynchronously instead of the driver Alexander Duyck
2018-12-01  2:48   ` Luis Chamberlain
2018-12-03 16:44     ` Alexander Duyck
2018-11-29  0:32 ` [driver-core PATCH v7 5/9] workqueue: Provide queue_work_node to queue work near a given NUMA node Alexander Duyck
2018-11-29  0:32 ` [driver-core PATCH v7 6/9] async: Add support for queueing on specific " Alexander Duyck
2018-11-29  0:32 ` [driver-core PATCH v7 7/9] driver core: Attach devices on CPU local to device node Alexander Duyck
2018-11-29  0:32 ` [driver-core PATCH v7 8/9] PM core: Use new async_schedule_dev command Alexander Duyck
2018-11-29  0:32 ` [driver-core PATCH v7 9/9] libnvdimm: Schedule device registration on node local to the device Alexander Duyck
