From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: "Brown, Len" <len.brown@intel.com>,
	Linux-pm mailing list <linux-pm@vger.kernel.org>,
	Greg KH <gregkh@linuxfoundation.org>,
	linux-nvdimm <linux-nvdimm@lists.01.org>,
	jiangshanlai@gmail.com,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	zwisler@kernel.org, Pavel Machek <pavel@ucw.cz>,
	Tejun Heo <tj@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Rafael J. Wysocki" <rafael@kernel.org>
Subject: Re: [RFC workqueue/driver-core PATCH 3/5] driver core: Probe devices asynchronously instead of the driver
Date: Thu, 27 Sep 2018 08:27:55 -0700	[thread overview]
Message-ID: <021d55fb-9f6a-0b52-3513-e9c5493bd7d7@linux.intel.com> (raw)
In-Reply-To: <CAPcyv4h5U4Fph52H80QodBRXK+PjS6Zw_6qK2+DXtr=qZT7Gzw@mail.gmail.com>



On 9/26/2018 5:48 PM, Dan Williams wrote:
> On Wed, Sep 26, 2018 at 2:51 PM Alexander Duyck
> <alexander.h.duyck@linux.intel.com> wrote:
>>
>> This change makes it so that we probe devices asynchronously instead of the
>> driver. This results in us seeing the same behavior if the device is
>> registered before the driver or after. This way we can avoid serializing
>> the initialization should the driver not be loaded until after the devices
>> have already been added.
>>
>> The motivation behind this is that if we have a set of devices that
>> take a significant amount of time to load we can greatly reduce the time to
>> load by processing them in parallel instead of one at a time. In addition,
>> each device can exist on a different node so placing a single thread on one
>> CPU to initialize all of the devices for a given driver can result in poor
>> performance on a system with multiple nodes.
>>
>> One issue I can see with this patch is that I am using the
>> dev_set/get_drvdata functions to store the driver in the device while I am
>> waiting on the asynchronous init to complete. For now I am protecting it by
>> using the lack of a dev->driver and the device lock.
>>
>> Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> [..]
>> @@ -891,6 +914,25 @@ static int __driver_attach(struct device *dev, void *data)
>>                  return ret;
>>          } /* ret > 0 means positive match */
>>
>> +       if (driver_allows_async_probing(drv)) {
>> +               /*
>> +                * Instead of probing the device synchronously we will
>> +                * probe it asynchronously to allow for more parallelism.
>> +                *
>> +                * We only take the device lock here in order to guarantee
>> +                * that the dev->driver and driver_data fields are protected
>> +                */
>> +               dev_dbg(dev, "scheduling asynchronous probe\n");
>> +               device_lock(dev);
>> +               if (!dev->driver) {
>> +                       get_device(dev);
>> +                       dev_set_drvdata(dev, drv);
>> +                       async_schedule(__driver_attach_async_helper, dev);
> 
> I'm not sure async drivers / sub-systems are ready for their devices
> to show up in parallel. While userspace should not be relying on
> kernel device names, people get upset when devices change kernel names
> from one boot to the next, and I can see this change leading to that
> scenario.

The thing is, the current async behavior already does this if the 
driver is loaded before the device is added. All I am doing is making 
the behavior you already get with the driver loaded first the 
standard, rather than having it depend on which order the driver and 
the device show up in. This way we get consistent behavior.
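
For reference, the other half of that drvdata handoff looks roughly 
like this. This is only a sketch of the helper that the 
async_schedule() call above points at, so the exact code in the patch 
may differ in detail:

static void __driver_attach_async_helper(void *_dev, async_cookie_t cookie)
{
	struct device *dev = _dev;
	struct device_driver *drv;

	device_lock(dev);

	/* Somebody may have bound the device while we sat in the queue. */
	if (!dev->driver) {
		/* __driver_attach() stashed the driver in drvdata. */
		drv = dev_get_drvdata(dev);
		driver_probe_device(drv, dev);
	}

	device_unlock(dev);

	/* Drop the reference taken before scheduling the async work. */
	put_device(dev);
}

The device lock plus the !dev->driver check is the only thing 
protecting that drvdata, which is the part I called out as 
questionable in the commit message.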

> If a driver / sub-system wants more parallelism than what
> driver_allows_async_probing() provides it should do it locally, for
> example, like libata does.

So where I actually saw this was with the legacy pmem setup I had. 
After doing all the work to parallelize things in the driver it had no 
effect. That was because the nd_pmem driver wasn't loaded yet, so all 
the device_add calls did was add the devices; they didn't attach the 
nd_pmem driver. Then when the driver loaded it serialized the probe 
calls, resulting in it taking twice as long as it needed to in order 
to initialize the memory.

This seems to affect standard persistent memory as well. The only 
difference is that instead of probing the device on the first pass, we 
kick it back and reprobe it in nd_pmem_probe/nd_pfn_probe in order to 
set the correct personality, and that in turn allows us to 
asynchronously reschedule the work on the correct CPU and deserialize 
it.
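
Just for context, the gate for all of this is the existing 
driver_allows_async_probing() check, which a driver typically 
satisfies by setting its probe_type. Purely illustrative (I am not 
claiming nd_pmem literally does this today), the opt-in looks 
something like:

static struct platform_driver example_driver = {
	.probe	= example_probe,
	.driver	= {
		.name		= "example",
		/* Existing opt-in honored by driver_allows_async_probing() */
		.probe_type	= PROBE_PREFER_ASYNCHRONOUS,
	},
};

With this patch that same opt-in also buys you parallel probing when 
the devices were registered before the driver loaded, which is exactly 
the ordering the pmem case above runs into.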

