From: Dave Chinner <david@fromorbit.com>
To: Waiman Long <Waiman.Long@hpe.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>,
	Jan Kara <jack@suse.com>, Jeff Layton <jlayton@poochiereds.net>,
	"J. Bruce Fields" <bfields@fieldses.org>,
	Tejun Heo <tj@kernel.org>,
	Christoph Lameter <cl@linux-foundation.org>,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Andi Kleen <andi@firstfloor.org>,
	Dave Chinner <dchinner@redhat.com>,
	Scott J Norton <scott.norton@hp.com>,
	Douglas Hatch <doug.hatch@hp.com>
Subject: Re: [RFC PATCH 2/2] vfs: Use per-cpu list for superblock's inode list
Date: Wed, 17 Feb 2016 21:37:39 +1100
Message-ID: <20160217103739.GP14668@dastard>
In-Reply-To: <1455672680-7153-3-git-send-email-Waiman.Long@hpe.com>

On Tue, Feb 16, 2016 at 08:31:20PM -0500, Waiman Long wrote:
> When many threads are trying to add inodes to or remove inodes
> from a superblock's s_inodes list, spinlock contention on the list
> can become a performance bottleneck.
> 
> This patch changes the s_inodes field to become a per-cpu list with
> per-cpu spinlocks.
> 
> An exit microbenchmark creates a large number of threads, attaches
> many inodes to them and then exits. The runtimes of that
> microbenchmark with 1000 threads, before and after the patch, on a
> 4-socket Intel E7-4820 v3 system (40 cores, 80 threads) were as
> follows:
> 
>   Kernel            Elapsed Time    System Time
>   ------            ------------    -----------
>   Vanilla 4.5-rc4      65.29s         82m14s
>   Patched 4.5-rc4      22.81s         23m03s

Pretty good :)

My fsmark tests usually show a fair bit of contention here - moving
250k inodes through the cache every second over 16p does generate a
bit of load on the list. The patch makes the inode list add/del
operations disappear completely from the perf profiles, and there's
a marginal decrease in runtime (~4m40s vs ~4m30s). I think the
global lock is right on the edge of breakdown under this load,
though, so if I were testing on a larger system the difference would
be much bigger.
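
For anyone who hasn't looked at patch 1/2 yet: the structure being
discussed is roughly one list head and one lock per CPU, so add/del
from different CPUs take different locks on different cache lines.
Something along these lines - an illustrative sketch, not the exact
code from the patch:

struct pcpu_list_head {
	struct list_head	list;
	spinlock_t		lock;
};

struct pcpu_list_node {
	struct list_head	list;
	spinlock_t		*lockptr;	/* lock of the list we are on */
};

/* add to the local CPU's list - only the local lock is taken */
static inline void pcpu_list_add(struct pcpu_list_node *node,
				 struct pcpu_list_head __percpu *head)
{
	struct pcpu_list_head *h = this_cpu_ptr(head);

	spin_lock(&h->lock);
	node->lockptr = &h->lock;	/* deletion takes this same lock */
	list_add(&node->list, &h->list);
	spin_unlock(&h->lock);
}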

I'll run some more testing on it, see if anything breaks.

A few comments on the code follow.

> @@ -1866,8 +1866,8 @@ void iterate_bdevs(void (*func)(struct block_device *, void *), void *arg)
>  {
>  	struct inode *inode, *old_inode = NULL;
>  
> -	spin_lock(&blockdev_superblock->s_inode_list_lock);
> -	list_for_each_entry(inode, &blockdev_superblock->s_inodes, i_sb_list) {
> +	for_all_percpu_list_entries_simple(inode, percpu_lock,
> +			blockdev_superblock->s_inodes_cpu, i_sb_list) {

This is kind of what I meant about names getting way too long. How
about something like:

#define walk_sb_inodes(inode, sb, pcpu_lock)	\
	for_all_percpu_list_entries_simple(inode, pcpu_lock,	\
					   sb->s_inodes_list, i_sb_list)

#define walk_sb_inodes_end(pcpu_lock) end_all_percpu_list_entries(pcpu_lock)

for brevity?
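
With those wrappers, the iterate_bdevs() hunk above would end up
looking something like this (a sketch - I'm assuming the lock cursor
is a spinlock_t pointer here; the exact type comes from patch 1/2):

	struct inode *inode, *old_inode = NULL;
	spinlock_t *percpu_lock;

	walk_sb_inodes(inode, blockdev_superblock, percpu_lock) {
		/* existing per-inode body, unchanged */
	}
	walk_sb_inodes_end(percpu_lock);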

> @@ -189,7 +190,7 @@ void fsnotify_unmount_inodes(struct super_block *sb)
>  		spin_unlock(&inode->i_lock);
>  
>  		/* In case the dropping of a reference would nuke next_i. */
> -		while (&next_i->i_sb_list != &sb->s_inodes) {
> +		while (&next_i->i_sb_list.list != percpu_head) {
>  			spin_lock(&next_i->i_lock);
>  			if (!(next_i->i_state & (I_FREEING | I_WILL_FREE)) &&
>  						atomic_read(&next_i->i_count)) {
> @@ -199,16 +200,16 @@ void fsnotify_unmount_inodes(struct super_block *sb)
>  				break;
>  			}
>  			spin_unlock(&next_i->i_lock);
> -			next_i = list_next_entry(next_i, i_sb_list);
> +			next_i = list_next_entry(next_i, i_sb_list.list);

pcpu_list_next_entry(next_i, i_sb_list)?
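
i.e. a wrapper that hides the embedded list_head, something like
(assuming the node's list_head member is called "list", as the hunk
above implies):

#define pcpu_list_next_entry(pos, member) \
	list_entry((pos)->member.list.next, typeof(*(pos)), member)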

> @@ -1397,9 +1398,8 @@ struct super_block {
>  	 */
>  	int s_stack_depth;
>  
> -	/* s_inode_list_lock protects s_inodes */
> -	spinlock_t		s_inode_list_lock ____cacheline_aligned_in_smp;
> -	struct list_head	s_inodes;	/* all inodes */
> +	/* The percpu locks protect s_inodes_cpu */
> +	PERCPU_LIST_HEAD(s_inodes_cpu);	/* all inodes */

There is no need to encode the type of list into the name.
i.e. drop the "_cpu" suffix - we can see it's a percpu list from the
declaration.
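
That is, the declaration would simply read:

	/* The percpu locks protect s_inodes */
	PERCPU_LIST_HEAD(s_inodes);	/* all inodes */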

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
