From: Peter Zijlstra <peterz@infradead.org>
To: Tejun Heo <tj@kernel.org>
Cc: linux-kernel@vger.kernel.org, torvalds@linux-foundation.org,
	tglx@linutronix.de, linux-pm@vger.kernel.org
Subject: Re: [PATCHSET] workqueue: reimplement CPU hotplug to keep idle workers
Date: Fri, 20 Jul 2012 19:21:17 +0200
Message-ID: <1342804877.2583.42.camel@twins>
In-Reply-To: <20120720170255.GE32763@google.com>

On Fri, 2012-07-20 at 10:02 -0700, Tejun Heo wrote:
> Hey, Peter.
> 
> On Fri, Jul 20, 2012 at 05:48:31PM +0200, Peter Zijlstra wrote:
> > On Tue, 2012-07-17 at 10:12 -0700, Tejun Heo wrote:
> > > While this makes rebinding somewhat more complicated, as it has to be
> > > able to rebind idle workers too, it allows the overall hotplug path to
> > > be much simpler.
> > 
> > I really don't see the point of re-binding... at that point you've
> > well and properly violated any per-cpu expectation, so why not finish
> > running the works on the disassociated thing and let new works accrue
> > on the per-cpu things again?
> 
> We've discussed this a couple times now, so the existing reasons were,
> 
> * Local affinity has, since the beginning, more often been used as a
>   form of optimization.  This, mixed with queue_work() /
>   queue_work_on(), does make things muddy.
> 
> * With local affinity used for optimization, we'd better support
>   detaching running workers - before cmwq, this used to be one of the
>   sources of trouble during power state changes.
> 
> * So, we have unbound workers which started out as bound while a CPU
>   is down.  When the CPU comes back up again, we can do one of the
>   following: 1. migrate the unbound ones to WORK_CPU_UNBOUND (can
>   also do this on CPU_DOWN), 2. leave them unbound and keep them
>   running in parallel with the bound ones, or 3. rebind them.  #2 is
>   the hairiest - it contaminates the usual !hotplug code paths.  #1 or
>   #3, I'm unsure, but given how global_cwqs don't usually interact
>   with each other, I thought #3 would have lower impact on hot paths.
> 
> So, the above was my rationale before this "we need to stop destroying
> and re-creating kthreads across CPU hotplug events because phones do
> it gazillion times".  Now, I don't think we have any other way.
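
A rough sketch of what option #3 (rebind on CPU_ONLINE) boils down to, using
made-up "toy" structures rather than the real global_cwq/worker definitions;
set_cpus_allowed_ptr() and cpumask_of() are the real kernel primitives such a
rebind would lean on:

  #include <linux/cpumask.h>
  #include <linux/kernel.h>
  #include <linux/list.h>
  #include <linux/sched.h>
  #include <linux/spinlock.h>

  /* illustrative only -- not the actual workqueue data structures */
  struct toy_worker {
          struct task_struct *task;
          struct list_head entry;         /* on toy_pool->workers */
  };

  struct toy_pool {
          spinlock_t lock;
          unsigned int cpu;
          struct list_head workers;       /* idle and busy workers */
          struct list_head worklist;      /* pending work items */
  };

  /*
   * Called from the CPU_ONLINE notifier for @pool->cpu.  Locking is
   * deliberately omitted: set_cpus_allowed_ptr() may sleep, so the real
   * code cannot simply hold the pool lock across it -- coordinating that
   * with sleeping idle workers is exactly what the rebind patches have
   * to deal with.
   */
  static void toy_rebind_workers(struct toy_pool *pool)
  {
          struct toy_worker *worker;

          list_for_each_entry(worker, &pool->workers, entry)
                  WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
                                                    cpumask_of(pool->cpu)) < 0);
  }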

OK, so why can't you splice the list of works from the CPU going down
onto the list of the CPU doing the down, and convert any busy worker
threads to be bound to the CPU doing the down?

That way there's nothing 'left' to get back to on up.
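
A minimal sketch of that splice idea, reusing the made-up toy_pool/toy_worker
layout from the sketch above; list_splice_init() is the real list primitive,
and the per-work bookkeeping (cwq pointers, flush colours) that an actual
implementation would have to fix up is ignored here:

  /*
   * On CPU_DOWN: hand everything belonging to the dying CPU's pool over
   * to the pool of the CPU executing the hotplug operation.  Locking is
   * simplified -- a real version needs both pool locks and must not hold
   * them across set_cpus_allowed_ptr(), which may sleep.
   */
  static void toy_drain_to_this_cpu(struct toy_pool *dying,
                                    struct toy_pool *here)
  {
          struct toy_worker *worker;

          /* pending works simply continue on the CPU doing the down */
          list_splice_init(&dying->worklist, &here->worklist);

          /*
           * Busy workers get pinned to the CPU doing the down so they
           * finish their current items there; nothing is left behind
           * that would need rebinding when the CPU comes back up.
           */
          list_for_each_entry(worker, &dying->workers, entry)
                  set_cpus_allowed_ptr(worker->task, cpumask_of(here->cpu));
  }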


Thread overview: 33+ messages
2012-07-17 17:12 [PATCHSET] workqueue: reimplement CPU hotplug to keep idle workers Tejun Heo
2012-07-17 17:12 ` [PATCH 1/9] workqueue: perform cpu down operations from low priority cpu_notifier() Tejun Heo
2012-07-20 21:52   ` Paul E. McKenney
2012-07-20 21:58     ` Tejun Heo
2012-07-21 21:36       ` Paul E. McKenney
2012-07-22 16:43         ` [PATCH] workqueue: fix spurious CPU locality WARN from process_one_work() Tejun Heo
2012-07-22 21:23           ` Paul E. McKenney
2012-07-17 17:12 ` [PATCH 2/9] workqueue: drop CPU_DYING notifier operation Tejun Heo
2012-07-17 17:12 ` [PATCH 3/9] workqueue: ROGUE workers are UNBOUND workers Tejun Heo
2012-07-17 17:12 ` [PATCH 4/9] workqueue: use mutex for global_cwq manager exclusion Tejun Heo
2012-07-17 17:12 ` [PATCH 5/9] workqueue: drop @bind from create_worker() Tejun Heo
2012-07-17 17:12 ` [PATCH 6/9] workqueue: reimplement CPU online rebinding to handle idle workers Tejun Heo
2012-07-17 17:12 ` [PATCH 7/9] workqueue: don't butcher idle workers on an offline CPU Tejun Heo
2012-07-17 17:12 ` [PATCH 8/9] workqueue: remove CPU offline trustee Tejun Heo
2012-07-17 17:12 ` [PATCH 9/9] workqueue: simplify CPU hotplug code Tejun Heo
2012-07-17 18:43 ` [PATCHSET] workqueue: reimplement CPU hotplug to keep idle workers Rafael J. Wysocki
2012-07-17 19:40   ` Tejun Heo
2012-07-20 15:48 ` Peter Zijlstra
2012-07-20 17:02   ` Tejun Heo
2012-07-20 17:21     ` Peter Zijlstra [this message]
2012-07-20 17:50       ` Tejun Heo
2012-07-20 18:22         ` Peter Zijlstra
2012-07-20 18:34           ` Tejun Heo
2012-07-20 19:44             ` Rafael J. Wysocki
2012-07-20 19:41               ` Tejun Heo
2012-07-21  6:42               ` Shilimkar, Santosh
2012-07-23  8:38               ` Peter De Schrijver
2012-07-20 16:39 ` Peter Zijlstra
2012-07-20 16:52   ` Tejun Heo
2012-07-20 17:01     ` Peter Zijlstra
2012-07-20 17:08       ` Tejun Heo
2012-07-20 17:19         ` Peter Zijlstra
2012-07-20 17:43           ` Tejun Heo
