From: Tejun Heo <tj@kernel.org>
To: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: jiangshanlai@gmail.com, linux-kernel@vger.kernel.org
Subject: Re: Is it really safe to use workqueues to drive expedited grace periods?
Date: Fri, 3 Mar 2017 11:30:49 -0800
Message-ID: <20170303193049.GC22962@wtj.duckdns.org>
In-Reply-To: <20170214001600.GZ30506@linux.vnet.ibm.com>

Hello, Paul.

Sorry about the long delay.  Travel + sickness.  Just starting to
catch up with things now.

On Mon, Feb 13, 2017 at 04:16:00PM -0800, Paul E. McKenney wrote:
> Thank you for the information!  So if I am to continue using workqueues
> for expedited RCU grace periods, I believe that I need to do the following:
> 
> 1.	Use alloc_workqueue() to create my own WQ_MEM_RECLAIM
> 	workqueue.

This is only necessary if RCU can be in a dependency chain in the
memory reclaim path - e.g. somebody doing synchronize_rcu_expedited()
in some obscure writeback path; however, given how widely RCU is used,
I don't think this is a bad idea.  After all, the only extra overhead
it brings is the memory used for an extra workqueue and its rescuer
thread.
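
Something like the following, just to illustrate (the "rcu_exp" name
and the init function are made up, not actual RCU code):

  #include <linux/workqueue.h>

  static struct workqueue_struct *rcu_exp_wq;

  static void __init rcu_exp_wq_init(void)
  {
          /*
           * WQ_MEM_RECLAIM gives the workqueue its own rescuer
           * thread, so at least one worker can always service it
           * even when memory is too tight to fork new workers.
           * max_active == 0 selects the default limit.
           */
          rcu_exp_wq = alloc_workqueue("rcu_exp", WQ_MEM_RECLAIM, 0);
          BUG_ON(!rcu_exp_wq);
  }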

> 2.	Rework my workqueue handler to avoid blocking waiting for
> 	the expedited grace period to complete.  I should be able
> 	to do a small number of timed waits, but if I actually
> 	wait for the grace period to complete, I might end up
> 	hogging the reserved items.  (Or does my workqueue supply
> 	them for me?  If so, so much the better!)

So, what the dedicated workqueue w/ WQ_MEM_RECLAIM guarantees is that
there will always be at least one worker thread available to execute
work items from that workqueue.

As long as your workqueue usage guarantees forward progress - that is,
as long as one work item in the workqueue won't block indefinitely on
another work item on the same workqueue - you shouldn't need to rework
the workqueue handler.

If there can be a dependency chain among work items of the same
WQ_MEM_RECLAIM workqueue, often the best approach is to break up the
chain and put each side into its own WQ_MEM_RECLAIM workqueue.
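
As a sketch of that split (all names made up): if one handler has to
wait on another work item, give the dependee its own rescuer-backed
workqueue so neither side can starve the other:

  static struct workqueue_struct *exp_wq, *exp_helper_wq;

  static void helper_fn(struct work_struct *work)
  {
          /* the part the first handler has to wait for */
  }

  static void exp_fn(struct work_struct *work)
  {
          struct work_struct helper;

          INIT_WORK_ONSTACK(&helper, helper_fn);
          /*
           * Queue on the *other* workqueue: if both items shared
           * exp_wq, this handler could pin the only guaranteed
           * worker while waiting, deadlocking under OOM.
           */
          queue_work(exp_helper_wq, &helper);
          flush_work(&helper);
          destroy_work_on_stack(&helper);
  }

  /* in some init path */
  exp_wq = alloc_workqueue("exp_wq", WQ_MEM_RECLAIM, 0);
  exp_helper_wq = alloc_workqueue("exp_helper_wq", WQ_MEM_RECLAIM, 0);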

> 3.	Concurrency would not be a problem -- there can be no more
> 	than four work elements in flight across both possible flavors
> 	of expedited grace periods.

You usually don't have to worry about concurrency all that much with
workqueues.  They'll provide the maximum concurrency the system can
support, as long as that's possible.

If the four work items can depend on each other, it'd be best to put
them in separate workqueues.  If not, there's nothing to worry about.
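
For example (reusing the made-up names from the sketches above), four
independent items queued on one workqueue simply run in parallel,
bounded only by max_active:

  static struct work_struct exp_work[4];
  int i;

  for (i = 0; i < ARRAY_SIZE(exp_work); i++) {
          INIT_WORK(&exp_work[i], exp_fn);
          queue_work(rcu_exp_wq, &exp_work[i]);
  }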

Thanks.

-- 
tejun
