From: "Darrick J. Wong" <djwong@kernel.org>
To: Dave Chinner <david@fromorbit.com>
Cc: linux-xfs@vger.kernel.org, hch@infradead.org
Subject: Re: [PATCH, post-03/20 1/1] xfs: hook up inodegc to CPU dead notification
Date: Wed, 4 Aug 2021 09:19:18 -0700	[thread overview]
Message-ID: <20210804161918.GU3601443@magnolia> (raw)
In-Reply-To: <20210804115225.GP2757197@dread.disaster.area>

On Wed, Aug 04, 2021 at 09:52:25PM +1000, Dave Chinner wrote:
> 
> From: Dave Chinner <dchinner@redhat.com>
> 
> So we don't leave queued inodes on a CPU we won't ever flush.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/xfs/xfs_icache.c | 36 ++++++++++++++++++++++++++++++++++++
>  fs/xfs/xfs_icache.h |  1 +
>  fs/xfs/xfs_super.c  |  2 +-
>  3 files changed, 38 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
> index f772f2a67a8b..9e2c95903c68 100644
> --- a/fs/xfs/xfs_icache.c
> +++ b/fs/xfs/xfs_icache.c
> @@ -1966,6 +1966,42 @@ xfs_inodegc_start(
>  	}
>  }
>  
> +/*
> + * Fold the dead CPU's inodegc queue into the current CPU's queue.
> + */
> +void
> +xfs_inodegc_cpu_dead(
> +	struct xfs_mount	*mp,
> +	int			dead_cpu)

unsigned int here (and in the xfs_icache.h prototype), since that's the
caller's type.
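
Something like this, maybe -- the cpu hotplug callbacks all hand us an
unsigned int cpu id:

	void
	xfs_inodegc_cpu_dead(
		struct xfs_mount	*mp,
		unsigned int		dead_cpu)

and in xfs_icache.h:

	void xfs_inodegc_cpu_dead(struct xfs_mount *mp, unsigned int cpu);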

> +{
> +	struct xfs_inodegc	*dead_gc, *gc;
> +	struct llist_node	*first, *last;
> +	int			count = 0;
> +
> +	dead_gc = per_cpu_ptr(mp->m_inodegc, dead_cpu);
> +	cancel_work_sync(&dead_gc->work);
> +
> +	if (llist_empty(&dead_gc->list))
> +		return;
> +
> +	first = dead_gc->list.first;
> +	last = first;
> +	while (last->next) {
> +		last = last->next;
> +		count++;
> +	}
> +	dead_gc->list.first = NULL;
> +	dead_gc->items = 0;
> +
> +	/* Add pending work to current CPU */
> +	gc = get_cpu_ptr(mp->m_inodegc);
> +	llist_add_batch(first, last, &gc->list);
> +	count += READ_ONCE(gc->items);
> +	WRITE_ONCE(gc->items, count);

I was wondering about the READ_ONCE/WRITE_ONCE pattern for gc->items:
it's meant to be an accurate count of the list items, right?  But
there's no hard synchronization (e.g. a spinlock) around them, which
implies that the only CPU that can touch that variable at all is the
one that the percpu structure belongs to.  I think that's ok here,
because the only accessors are _queue() and _worker(), and both of
those are supposed to run on the CPU that owns the percpu list, right?

In which case: why can't we just say count = dead_gc->items;?  @dead_cpu
is being offlined, which implies that nothing will get scheduled on it,
right?
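
i.e. something like this (a sketch, assuming the dead CPU's percpu
data really is quiesced by the time the dead notifier runs):

	first = dead_gc->list.first;
	last = first;
	while (last->next)
		last = last->next;

	/* @dead_cpu is offline, so nothing else can touch this counter. */
	count = dead_gc->items;
	dead_gc->list.first = NULL;
	dead_gc->items = 0;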

> +	put_cpu_ptr(gc);
> +	queue_work(mp->m_inodegc_wq, &gc->work);

Should this be thresholded like we do for _inodegc_queue?

In the old days I would have imagined that cpu offlining should be rare
enough <cough> that it probably doesn't make any real difference.  OTOH
my cloud colleague reminds me that they aggressively offline cpus to
reduce licensing cost(!).
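
Something like this, perhaps (a rough sketch; XFS_INODEGC_CPU_DEAD_BATCH
is a hypothetical cutoff, and small batches would then have to wait for
the next _queue() call on this CPU to kick the worker):

	/*
	 * Hypothetical: only kick the worker immediately if we moved a
	 * large batch off the dead CPU; otherwise leave the inodes for
	 * the next queue/worker cycle on this CPU to pick up.
	 */
	if (count >= XFS_INODEGC_CPU_DEAD_BATCH)
		queue_work(mp->m_inodegc_wq, &gc->work);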

--D

> +}
> +
>  #ifdef CONFIG_XFS_RT
>  static inline bool
>  xfs_inodegc_want_queue_rt_file(
> diff --git a/fs/xfs/xfs_icache.h b/fs/xfs/xfs_icache.h
> index bdf2a8d3fdd5..853d5bfc0cfb 100644
> --- a/fs/xfs/xfs_icache.h
> +++ b/fs/xfs/xfs_icache.h
> @@ -79,5 +79,6 @@ void xfs_inodegc_worker(struct work_struct *work);
>  void xfs_inodegc_flush(struct xfs_mount *mp);
>  void xfs_inodegc_stop(struct xfs_mount *mp);
>  void xfs_inodegc_start(struct xfs_mount *mp);
> +void xfs_inodegc_cpu_dead(struct xfs_mount *mp, int cpu);
>  
>  #endif
> diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
> index c251679e8514..f579ec49eb7a 100644
> --- a/fs/xfs/xfs_super.c
> +++ b/fs/xfs/xfs_super.c
> @@ -2187,7 +2187,7 @@ xfs_cpu_dead(
>  	spin_lock(&xfs_mount_list_lock);
>  	list_for_each_entry_safe(mp, n, &xfs_mount_list, m_mount_list) {
>  		spin_unlock(&xfs_mount_list_lock);
> -		/* xfs_subsys_dead(mp, cpu); */
> +		xfs_inodegc_cpu_dead(mp, cpu);
>  		spin_lock(&xfs_mount_list_lock);
>  	}
>  	spin_unlock(&xfs_mount_list_lock);

