From: Frederic Weisbecker <fweisbec@gmail.com>
To: Kevin Hilman <khilman@linaro.org>
Cc: Tejun Heo <tj@kernel.org>,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>,
	Zoran Markovic <zoran.markovic@linaro.org>,
	linux-kernel@vger.kernel.org,
	Shaibal Dutta <shaibal.dutta@broadcom.com>,
	Dipankar Sarma <dipankar@in.ibm.com>
Subject: Re: [RFC PATCH] rcu: move SRCU grace period work to power efficient workqueue
Date: Thu, 27 Feb 2014 16:08:46 +0100	[thread overview]
Message-ID: <20140227150843.GB19580@localhost.localdomain> (raw)
In-Reply-To: <7hk3cx46rw.fsf@paris.lan>

On Fri, Feb 14, 2014 at 03:24:35PM -0800, Kevin Hilman wrote:
> Tejun Heo <tj@kernel.org> writes:
> 
> > Hello,
> >
> > On Wed, Feb 12, 2014 at 11:02:41AM -0800, Paul E. McKenney wrote:
> >> +2.	Use the /sys/devices/virtual/workqueue/*/cpumask sysfs files
> >> +	to force the WQ_SYSFS workqueues to run on the specified set
> >> +	of CPUs.  The set of WQ_SYSFS workqueues can be displayed using
> >> +	"ls sys/devices/virtual/workqueue".
> >
> > One thing to be careful about is that once published, it becomes part
> > of userland visible interface.  Maybe adding some words warning
> > against sprinkling WQ_SYSFS willy-nilly is a good idea?
> 
> In the NO_HZ_FULL case, it seems to me we'd always want all unbound
> workqueues to have their affinity set to the housekeeping CPUs.
> 
> Is there any reason not to enable WQ_SYSFS whenever WQ_UNBOUND is set so
> the affinity can be controlled?  I guess the main reason would be that 
> all of these workqueue names would become permanent ABI.

Right. It's a legitimate worry, but couldn't we treat workqueue names
just like kernel thread names? I.e. something that can be renamed or can
disappear at any time from one kernel version to the next?

Or does sysfs have really strict rules about that and I'm just daydreaming?

I've been thinking we could also have a pseudo-workqueue directory,
/sys/devices/virtual/workqueue/unbounds, with only cpumask as a file.

Writing to it would set the affinity of all unbound workqueues, or at least
of those that don't have WQ_SYSFS.

This would solve the ABI issue and keep a single consistent interface
for workqueue affinity.
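
To make that concrete, here is a rough, untested sketch (not a patch) of the
helper such an "unbounds" cpumask file could call into. It would have to live
in kernel/workqueue.c since it walks the internal workqueues list; the
function name is made up, and the locking against concurrent workqueue
creation/destruction is deliberately left out (apply_workqueue_attrs() takes
wq_pool_mutex itself, so the walk can't simply hold that lock):

/* Sketch only: re-apply a user-supplied cpumask to every unbound
 * workqueue that doesn't expose its own WQ_SYSFS cpumask file.
 */
static int unbounds_apply_cpumask(const struct cpumask *cpumask)
{
	struct workqueue_struct *wq;
	struct workqueue_attrs *attrs;
	int ret = 0;

	attrs = alloc_workqueue_attrs(GFP_KERNEL);
	if (!attrs)
		return -ENOMEM;

	/* XXX: needs protection against wq creation/destruction */
	list_for_each_entry(wq, &workqueues, list) {
		if (!(wq->flags & WQ_UNBOUND) || (wq->flags & WQ_SYSFS))
			continue;

		/* start from the wq's current unbound attrs and only
		 * override the cpumask */
		copy_workqueue_attrs(attrs, wq->unbound_attrs);
		cpumask_copy(attrs->cpumask, cpumask);
		ret = apply_workqueue_attrs(wq, attrs);
		if (ret)
			break;
	}

	free_workqueue_attrs(attrs);
	return ret;
}

The store handler of that cpumask attribute would then just cpumask_parse()
the user buffer and pass the result here.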

> 
> At least for NO_HZ_FULL, maybe this should be automatic.  The cpumask of
> unbound workqueues should default to !tick_nohz_full_mask?  Any WQ_SYSFS
> workqueues could still be overridden from userspace, but at least the
> default would be sane, and help keep full dynticks CPUs isolated.
> 
> Example patch below, only boot tested on 4-CPU ARM system with
> CONFIG_NO_HZ_FULL_ALL=y and verified that 'cat
> /sys/devices/virtual/workqueue/writeback/cpumask' looked sane.  If this
> looks OK, I can maybe clean it up a bit and make it a runtime check
> instead of a compile-time check.

Yeah, that can work too. But I think I prefer the idea of keeping a sysfs
interface for all workqueues (whether we use a pseudo "unbounds" dir or not),
because then the workqueue core stays unaware of dynticks details and doesn't
end up fiddling with timer core internals like the full dynticks cpumask.
That kind of fiddling means more interdependency, and possible headaches with
the ordering of timer and workqueue init.

Moreover, people may forget to change the WQ_SYSFS workqueues if all the other
UNBOUND workqueues are known to be handled automatically. Or do we handle
WQ_SYSFS as well along the way? But even then a WQ_SYSFS cpumask may be
modified by other user programs, so it's still a round of settings that must
be applied before doing any isolation work.
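
For illustration, that round of settings amounts to something like the
following from userspace (just a sketch; the housekeeping CPUs 0-1, i.e. hex
mask "3", are an arbitrary example, and every directory under
/sys/devices/virtual/workqueue that exposes a cpumask file gets written):

/* Sketch: pin all WQ_SYSFS workqueues to the housekeeping CPUs by
 * writing each /sys/devices/virtual/workqueue/<wq>/cpumask file.
 */
#include <dirent.h>
#include <errno.h>
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define WQ_SYSFS_DIR "/sys/devices/virtual/workqueue"

int main(void)
{
	const char *mask = "3\n";	/* hex cpumask: CPUs 0-1 */
	DIR *dir = opendir(WQ_SYSFS_DIR);
	struct dirent *de;

	if (!dir) {
		perror(WQ_SYSFS_DIR);
		return 1;
	}

	while ((de = readdir(dir))) {
		char path[PATH_MAX];
		int fd;

		if (de->d_name[0] == '.')
			continue;

		snprintf(path, sizeof(path), "%s/%s/cpumask",
			 WQ_SYSFS_DIR, de->d_name);
		fd = open(path, O_WRONLY);
		if (fd < 0)
			continue;	/* not a WQ_SYSFS workqueue dir */
		if (write(fd, mask, strlen(mask)) < 0)
			fprintf(stderr, "%s: %s\n", path, strerror(errno));
		close(fd);
	}
	closedir(dir);
	return 0;
}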

So I have mixed feelings about the tradeoff between code complexity,
simplicity for users, etc...

What do you guys think?


Thread overview: 25+ messages
2014-01-31 19:53 [RFC PATCH] rcu: move SRCU grace period work to power efficient workqueue Zoran Markovic
2014-01-31 20:10 ` Zoran Markovic
2014-02-10 10:08 ` Lai Jiangshan
2014-02-10 18:47   ` Paul E. McKenney
2014-02-12 18:23     ` Frederic Weisbecker
2014-02-12 19:02       ` Paul E. McKenney
2014-02-12 19:23         ` Tejun Heo
2014-02-12 19:59           ` Paul E. McKenney
2014-02-12 20:13             ` Tejun Heo
2014-02-12 23:04             ` Frederic Weisbecker
2014-02-13  0:33               ` Paul E. McKenney
2014-02-13  1:30                 ` Lai Jiangshan
2014-02-13 14:05                 ` Frederic Weisbecker
2014-02-14 23:24           ` Kevin Hilman
2014-02-15  7:36             ` Mike Galbraith
2014-02-16 16:41               ` Paul E. McKenney
2014-02-17  4:50                 ` Mike Galbraith
2014-02-19  7:00                   ` Mike Galbraith
2014-02-24 17:55                   ` Kevin Hilman
2014-02-24 18:25                     ` Mike Galbraith
2014-02-27 15:08             ` Frederic Weisbecker [this message]
2014-03-10  9:52             ` Viresh Kumar
2014-02-17  5:26       ` Mike Galbraith
2014-02-27 14:43         ` Frederic Weisbecker
2014-02-27 15:22           ` Mike Galbraith
