From: Tejun Heo <tj@kernel.org>
To: David Howells <dhowells@redhat.com>
Cc: linux-afs@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	Lai Jiangshan <jiangshanlai@gmail.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 01/11] workqueue: Add a decrement-after-return and wake if 0 facility
Date: Wed, 6 Sep 2017 07:51:39 -0700	[thread overview]
Message-ID: <20170906145139.GO1774378@devbig577.frc2.facebook.com> (raw)
In-Reply-To: <27489.1504623016@warthog.procyon.org.uk>

Hello, David.

On Tue, Sep 05, 2017 at 03:50:16PM +0100, David Howells wrote:
> With one of my latest patches to AFS, there's a set of cell records, where
> each cell has a manager work item that maintains that cell, including
> refreshing DNS records and excising expired records from the list.  Performing
> the excision in the manager work item makes handling the fscache index cookie
> easier (you can't have two cookies attached to the same object), amongst other
> things.
> 
> There's also an overseer work item that maintains a single expiry timer for
> all the cells and queues the per-cell work items to do DNS updates and cell
> removal.
> 
> The reason that the overseer exists is that it makes it easier to do a put on
> a cell.  The cell decrements the cell refcount and then wants to schedule the
> cell for destruction - but it's no longer permitted to touch the cell.  I
> could use atomic_dec_and_lock(), but that's messy.  It's cleaner just to set
> the timer on the overseer and leave it to that.
> 
> However, if someone does rmmod, I have to be able to clean everything up.  The
> overseer timer may be queued or running; the overseer may be queued *and*
> running and may get queued again by the timer; and each cell's work item may
> be queued *and* running and may get queued again by the manager.

Thanks for the detailed explanation.

> > Why can't it be done via the usual "flush from exit"?
> 
> Well, it can, but you need a flush for each separate level of dependencies,
> where one dependency will kick off another level of dependency during the
> cleanup.
> 
> So what I think I would have to do is set a flag to say that no one is allowed
> to set the timer now (this shouldn't happen outside of server or volume cache
> clearance), delete the timer synchronously, flush the work queue four times
> and then do an RCU barrier.
> 
> However, since I have volumes with dependencies on servers and cells, possibly
> with their own managers, I think I may need up to 12 flushes, possibly with
> interspersed RCU barriers.

Would it be possible to isolate work items for the cell in its own
workqueue and use drain_workqueue()?  Separating out flush domains is
one of the main use cases for dedicated workqueues after all.
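[Editorial note: Tejun's suggestion could look roughly like the following kernel-side sketch. The names `afs_cell_wq`, `afs_queue_cell`, and the `struct afs_cell` layout are hypothetical, not taken from the patch series.]

```c
/* Dedicated workqueue so cell teardown gets its own flush domain. */
static struct workqueue_struct *afs_cell_wq;

static int __init afs_cell_init(void)
{
	afs_cell_wq = alloc_workqueue("afs_cells", WQ_UNBOUND, 0);
	return afs_cell_wq ? 0 : -ENOMEM;
}

/* All per-cell manager work is queued on the dedicated workqueue. */
static void afs_queue_cell(struct afs_cell *cell)
{
	queue_work(afs_cell_wq, &cell->manager);
}

static void __exit afs_cell_exit(void)
{
	/* drain_workqueue() flushes repeatedly until the queue is empty:
	 * work items already on the queue may requeue themselves (or each
	 * other), but new work from elsewhere is rejected.  That replaces
	 * the "flush N times" guesswork with a single call. */
	drain_workqueue(afs_cell_wq);
	destroy_workqueue(afs_cell_wq);
	rcu_barrier();		/* wait out any pending call_rcu() frees */
}
```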

> It's much simpler to count out the objects than to try and get the flushing
> right.

I still feel very reluctant to add a generic counting & trigger
mechanism to work items for this.  I think it's too generic a solution
for a very specific problem.

Thanks.

-- 
tejun


Thread overview: 26+ messages
2017-09-01 15:40 [RFC PATCH 01/11] workqueue: Add a decrement-after-return and wake if 0 facility David Howells
2017-09-01 15:41 ` [RFC PATCH 02/11] refcount: Implement inc/decrement-and-return functions David Howells
2017-09-01 16:42   ` Peter Zijlstra
2017-09-01 21:15   ` David Howells
2017-09-01 21:50     ` Peter Zijlstra
2017-09-01 22:03       ` Peter Zijlstra
2017-09-01 22:51     ` David Howells
2017-09-04  7:30       ` Peter Zijlstra
2017-09-04 15:36   ` Christoph Hellwig
2017-09-04 16:08   ` David Howells
2017-09-05  6:45     ` Christoph Hellwig
2017-09-01 15:41 ` [RFC PATCH 03/11] Pass mode to wait_on_atomic_t() action funcs and provide default actions David Howells
2017-09-01 15:41 ` [RFC PATCH 04/11] Add a function to start/reduce a timer David Howells
2017-10-20 12:20   ` Thomas Gleixner
2017-11-09  0:33   ` David Howells
2017-09-01 15:41 ` [RFC PATCH 05/11] afs: Lay the groundwork for supporting network namespaces David Howells
2017-09-01 15:41 ` [RFC PATCH 06/11] afs: Add some protocol defs David Howells
2017-09-01 15:41 ` [RFC PATCH 07/11] afs: Update the cache index structure David Howells
2017-09-01 15:41 ` [RFC PATCH 08/11] afs: Keep and pass sockaddr_rxrpc addresses rather than in_addr David Howells
2017-09-01 15:41 ` [RFC PATCH 09/11] afs: Allow IPv6 address specification of VL servers David Howells
2017-09-01 15:42 ` [RFC PATCH 10/11] afs: Overhaul cell database management David Howells
2017-09-01 15:42 ` [RFC PATCH 11/11] afs: Retry rxrpc calls with address rotation on network error David Howells
2017-09-01 15:52 ` [RFC PATCH 00/11] AFS: Namespacing part 1 David Howells
2017-09-05 13:29 ` [RFC PATCH 01/11] workqueue: Add a decrement-after-return and wake if 0 facility Tejun Heo
2017-09-05 14:50 ` David Howells
2017-09-06 14:51   ` Tejun Heo [this message]
