From: Alexander Duyck <alexander.h.duyck@linux.intel.com>
To: Tejun Heo <tj@kernel.org>
Cc: len.brown@intel.com, linux-pm@vger.kernel.org,
	gregkh@linuxfoundation.org, linux-nvdimm@lists.01.org,
	jiangshanlai@gmail.com, linux-kernel@vger.kernel.org,
	zwisler@kernel.org, pavel@ucw.cz, rafael@kernel.org,
	akpm@linux-foundation.org
Subject: Re: [RFC workqueue/driver-core PATCH 1/5] workqueue: Provide queue_work_near to queue work near a given NUMA node
Date: Tue, 2 Oct 2018 11:23:26 -0700	[thread overview]
Message-ID: <be9081de-f186-b265-934d-78cec2a8792f@linux.intel.com> (raw)
In-Reply-To: <20181002174116.GG270328@devbig004.ftw2.facebook.com>

On 10/2/2018 10:41 AM, Tejun Heo wrote:
> Hello,
> 
> On Mon, Oct 01, 2018 at 02:54:39PM -0700, Alexander Duyck wrote:
>>> It might be better to leave queue_work_on() to be used for per-cpu
>>> workqueues and introduce queue_work_near() as you suggested.  I just
>>> don't want it to duplicate the node selection code in it.  Would that
>>> work?
>>
>> So if I understand what you are saying correctly, we default to
>> round-robin when a given node has no CPUs attached to it. I could
>> probably work with that if that is the default behavior instead of
>> adding much of the complexity I already have.
> 
> Yeah, it's all in wq_select_unbound_cpu().  Right now, if the
> requested cpu isn't in wq_unbound_cpumask, it falls back to dumb
> round-robin.  We can probably do better there and find the nearest
> node considering topology.

Well, if we could get wq_select_unbound_cpu() doing the right thing based 
on node topology, that would solve most of my work right there. 
Basically I could just pass WORK_CPU_UNBOUND with the correct node and it 
would take care of getting to the right CPU.
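
Just to illustrate what I mean, something along these lines -- a rough 
sketch only, not the actual RFC patch, and it assumes it sits in 
kernel/workqueue.c where the workqueue internals are visible:

	/*
	 * Sketch: pick any online CPU on the requested node, otherwise
	 * fall back to WORK_CPU_UNBOUND and let the existing unbound
	 * CPU selection handle it.
	 */
	bool queue_work_near(int node, struct workqueue_struct *wq,
			     struct work_struct *work)
	{
		int cpu = WORK_CPU_UNBOUND;

		if (node != NUMA_NO_NODE) {
			/* Any online CPU on the requested node will do. */
			int target = cpumask_any_and(cpumask_of_node(node),
						     cpu_online_mask);
			if (target < nr_cpu_ids)
				cpu = target;
		}

		return queue_work_on(cpu, wq, work);
	}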

>> The question I have then is what should I do about workqueues that
>> aren't WQ_UNBOUND if they attempt to use queue_work_near? In that
> 
> Hmm... yeah, let's just use queue_work_on() for now.  We can sort it
> out later and users could already do that anyway.
> 
> Thanks.

So are you saying I should just return an error for now if somebody 
tries to use something other than an unbound workqueue with 
queue_work_near(), and expect everyone else to just use queue_work_on() 
for the other workqueue types?
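
In other words, something as simple as this at the top of 
queue_work_near() -- again just a sketch, and the return convention here 
is an assumption on my part:

	/* Only unbound workqueues know what "near a node" means. */
	if (WARN_ON_ONCE(!(wq->flags & WQ_UNBOUND)))
		return false;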

Thanks.

- Alex