From: Dave Chinner <david@fromorbit.com>
To: "Darrick J. Wong" <djwong@kernel.org>
Cc: linux-xfs@vger.kernel.org, hch@infradead.org
Subject: Re: [PATCH, post-03/20 1/1] xfs: hook up inodegc to CPU dead notification
Date: Thu, 5 Aug 2021 07:48:59 +1000
Message-ID: <20210804214859.GT2757197@dread.disaster.area>
In-Reply-To: <20210804161918.GU3601443@magnolia>

On Wed, Aug 04, 2021 at 09:19:18AM -0700, Darrick J. Wong wrote:
> On Wed, Aug 04, 2021 at 09:52:25PM +1000, Dave Chinner wrote:
> > 
> > From: Dave Chinner <dchinner@redhat.com>
> > 
> > So we don't leave queued inodes on a CPU we won't ever flush.
> > 
> > Signed-off-by: Dave Chinner <dchinner@redhat.com>
> > ---
> >  fs/xfs/xfs_icache.c | 36 ++++++++++++++++++++++++++++++++++++
> >  fs/xfs/xfs_icache.h |  1 +
> >  fs/xfs/xfs_super.c  |  2 +-
> >  3 files changed, 38 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> > index f772f2a67a8b..9e2c95903c68 100644
> > --- a/fs/xfs/xfs_icache.c
> > +++ b/fs/xfs/xfs_icache.c
> > @@ -1966,6 +1966,42 @@ xfs_inodegc_start(
> >  	}
> >  }
> >  
> > +/*
> > + * Fold the dead CPU inodegc queue into the current CPUs queue.
> > + */
> > +void
> > +xfs_inodegc_cpu_dead(
> > +	struct xfs_mount	*mp,
> > +	int			dead_cpu)
> 
> unsigned int, since that's the caller's type.

*nod*
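
i.e. something like this for the prototype (just the type change,
untested sketch):

	void
	xfs_inodegc_cpu_dead(
		struct xfs_mount	*mp,
		unsigned int		dead_cpu)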

> > +{
> > +	struct xfs_inodegc	*dead_gc, *gc;
> > +	struct llist_node	*first, *last;
> > +	int			count = 0;
> > +
> > +	dead_gc = per_cpu_ptr(mp->m_inodegc, dead_cpu);
> > +	cancel_work_sync(&dead_gc->work);
> > +
> > +	if (llist_empty(&dead_gc->list))
> > +		return;
> > +
> > +	first = dead_gc->list.first;
> > +	last = first;
> > +	while (last->next) {
> > +		last = last->next;
> > +		count++;
> > +	}
> > +	dead_gc->list.first = NULL;
> > +	dead_gc->items = 0;
> > +
> > +	/* Add pending work to current CPU */
> > +	gc = get_cpu_ptr(mp->m_inodegc);
> > +	llist_add_batch(first, last, &gc->list);
> > +	count += READ_ONCE(gc->items);
> > +	WRITE_ONCE(gc->items, count);
> 
> I was wondering about the READ/WRITE_ONCE pattern for gc->items: it's
> meant to be an accurate count of the list items, right?  But there's no
> hard synchronization (e.g. spinlock) around them, which means that the
> only CPU that can access that variable at all is the one that the percpu
> structure belongs to, right?  And I think that's ok here, because the
> only accessors are _queue() and _worker(), which both are supposed to
> run on the same CPU since they're percpu lists, right?

For items that are per-cpu, we only need to guarantee that the
normal case is access by that CPU only and that dependent accesses
within an algorithm occur within a preempt-disabled region. The use
of get_cpu_ptr()/put_cpu_ptr() creates a critical region where
preemption is disabled on that CPU. Hence we can read, modify and
write a per-cpu variable without locking, knowing that nothing else
will be attempting the same modification at the same time on a
different CPU, because any other CPU will be accessing the percpu
instance local to it, not the one belonging to this CPU.
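
As a rough illustration of the pattern (made-up names, not the
actual XFS code):

	/* one instance per CPU, e.g. from alloc_percpu() */
	struct foo_pcpu {
		int	count;
	};
	struct foo_pcpu __percpu	*foo;

	static void foo_count_inc(void)
	{
		struct foo_pcpu	*p;

		/*
		 * get_cpu_ptr() disables preemption and hands back this
		 * CPU's instance...
		 */
		p = get_cpu_ptr(foo);

		/*
		 * ...so this read-modify-write cannot race with another
		 * instance of this code running on the same CPU.
		 */
		p->count++;

		/* re-enables preemption */
		put_cpu_ptr(foo);
	}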

The reason for using READ_ONCE/WRITE_ONCE is largely to ensure that
we fetch and store the variable as a single, untorn access, as the
work that zeroes the count can sometimes run on a different CPU. It
also keeps the kernel data race detector (KCSAN) quiet...

As it is, the count of items is rough, and doesn't need to be
accurate. If we race with a zeroing of the count, we'll set the
count to be higher (as if the zeroing didn't occur) and that just
causes the work to be rescheduled sooner than it otherwise would. A
race with zeroing is not the end of the world...
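
The worst case interleave is something like this (illustrative only,
"nr" standing in for the number of items being added):

	fold-in on CPU A			worker on CPU B
	count = READ_ONCE(gc->items);
						WRITE_ONCE(gc->items, 0);
	WRITE_ONCE(gc->items, count + nr);

gc->items ends up a bit higher than the real queue depth, so the
next queue/flush just fires a little earlier than it strictly needs
to.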

> In which case: why can't we just say count = dead_gc->items;?  @dead_cpu
> is being offlined, which implies that nothing will get scheduled on it,
> right?

The local CPU might already have items queued, so the count should
include them, too.

> > +	put_cpu_ptr(gc);
> > +	queue_work(mp->m_inodegc_wq, &gc->work);
> 
> Should this be thresholded like we do for _inodegc_queue?

I thought about that, then thought "this is slow path stuff, we just
want to clear out the backlog so we don't care about batching.."
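
For the record, a thresholded variant would look something like this
(hypothetical sketch, XFS_INODEGC_DEAD_BATCH is a made-up name), and
I don't think it's worth the extra logic on a hotunplug path:

	/*
	 * Only kick the worker immediately if the folded backlog is
	 * large; otherwise leave it for the next xfs_inodegc_queue()
	 * call on this CPU to deal with.
	 */
	if (count > XFS_INODEGC_DEAD_BATCH)
		queue_work(mp->m_inodegc_wq, &gc->work);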

> In the old days I would have imagined that cpu offlining should be rare
> enough <cough> that it probably doesn't make any real difference.  OTOH
> my cloudic colleague reminds me that they aggressively offline cpus to
> reduce licensing cost(!).

Yeah, CPU hotplug is very rare, except in those rare environments
where it is very common....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

