From: Dave Chinner <david@fromorbit.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: "Darrick J. Wong" <darrick.wong@oracle.com>, linux-xfs@vger.kernel.org
Subject: Re: [PATCH 8/8] xfs: cache unlinked pointers in an rhashtable
Date: Sat, 2 Feb 2019 10:59:00 +1100
Message-ID: <20190201235900.GX6173@dastard>
In-Reply-To: <20190201080343.GD22295@infradead.org>

On Fri, Feb 01, 2019 at 12:03:43AM -0800, Christoph Hellwig wrote:
> On Thu, Jan 31, 2019 at 03:18:40PM -0800, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@oracle.com>
> > 
> > Use a rhashtable to cache the unlinked list incore.  This should speed
> > up unlinked processing considerably when there are a lot of inodes on
> > the unlinked list because iunlink_remove no longer has to traverse an
> > entire bucket list to find which inode points to the one being removed.
> 
> This sounds pretty reasonable and a real review will follow.  But can
> you quantify the considerable speedups for real life workloads?

When you have more than a few hundred open-but-unlinked inodes in an
AG, the unlinked list walking starts to take a significant amount of
time when the final reference to an inode goes away. It's
essentially an O(N) buffer walk, so it takes a lot of CPU time once a
list gets more than a few inodes long.
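
To make the O(N) concrete, here's a toy model of one AGI bucket. It's
purely illustrative - not the real xfs_iunlink_remove(), where every
step of the walk is an inode cluster buffer read rather than an array
index - but it shows why the removal cost scales with list length:

#include <linux/types.h>

#define NULLAGINO	((u32)-1)

/*
 * Toy model: next[i] stands in for inode i's on-disk di_next_unlinked
 * pointer. Because the list is singly linked, finding the inode that
 * points at 'target' means walking from the bucket head.
 */
static u32 find_predecessor(const u32 *next, u32 head, u32 target)
{
	u32 prev = NULLAGINO;
	u32 cur = head;

	while (cur != NULLAGINO && cur != target) {
		prev = cur;		/* O(N) in the list length */
		cur = next[cur];
	}
	return prev;	/* NULLAGINO if 'target' is the bucket head */
}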

With Darrick's background inactivation, the lists grow to tens of
thousands of inodes very quickly. I was seeing >95% of CPU time
being spent in unlinked list walking on large scale 'rm -rf'
workloads and performance absolutely dropped off a cliff. My initial
code (from years and years ago) used a doubly linked list, and when
I forward ported that and ran it up, the CPU overhead of unlinked
list removal was cut to almost nothing and all the performance
regressions with background inactivation went away. i.e. unlinked
list removal went from an O(N) overhead to always being O(1), so the
length of the list didn't affect performance any more.
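
For reference, the in-core doubly linked approach is conceptually
nothing more than a list_head per inode. This is only a sketch under
that assumption - the field and function names are made up, not the
actual forward-ported patch:

#include <linux/list.h>

/*
 * Sketch only: an incore doubly linked unlinked list. The two pointers
 * cost 16 bytes per inode on 64 bit, but removal never has to find the
 * predecessor by walking - list_del_init() is O(1).
 */
struct sketch_inode {
	struct list_head	i_unlinked;	/* hypothetical field */
	/* ... rest of the inode ... */
};

static void sketch_iunlink_remove(struct sketch_inode *ip)
{
	list_del_init(&ip->i_unlinked);	/* O(1) from any list position */
}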

Darrick's code replaces the list_head in each inode that I used with
a rhashtable - it removes the 16 bytes of per-inode memory footprint
my implementation required, but is otherwise identical. Hence I
expect it'll show exactly the same performance characteristics as my
list_head version did.
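
As a rough sketch of what such a backref cache looks like with the
kernel's rhashtable API (the names and layout below are illustrative
guesses, not necessarily what Darrick's patch does): keep one small
entry per unlinked-list edge, keyed by the inode being pointed at, so
"who points at inode X" becomes a single hash lookup instead of a
bucket walk.

#include <linux/rhashtable.h>
#include <linux/slab.h>

/*
 * Sketch: one entry per unlinked-list edge, keyed by the inode being
 * pointed at (u32 standing in for xfs_agino_t, the 32 bit AG-relative
 * inode number). Names are illustrative, not the actual patch.
 */
struct iunlink_backref {
	struct rhash_head	node;
	u32			next_agino;	/* key: inode pointed at */
	u32			prev_agino;	/* inode doing the pointing */
};

static const struct rhashtable_params iunlink_params = {
	.key_len	= sizeof(u32),
	.key_offset	= offsetof(struct iunlink_backref, next_agino),
	.head_offset	= offsetof(struct iunlink_backref, node),
	.automatic_shrinking = true,
};

/* Record that 'prev' now points at 'next' on the on-disk list. */
static int iunlink_add_backref(struct rhashtable *ht, u32 prev, u32 next)
{
	struct iunlink_backref *b = kmalloc(sizeof(*b), GFP_NOFS);

	if (!b)
		return -ENOMEM;
	b->prev_agino = prev;
	b->next_agino = next;
	return rhashtable_insert_fast(ht, &b->node, iunlink_params);
}

/* O(1): find the entry for whoever points at 'agino', NULL if nobody. */
static struct iunlink_backref *
iunlink_lookup_backref(struct rhashtable *ht, u32 agino)
{
	return rhashtable_lookup_fast(ht, &agino, iunlink_params);
}

In such a scheme, once the on-disk pointer has been rewritten the stale
entry would be dropped with rhashtable_remove_fast() and freed.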

IOWs, it's really required for backgrounding and parallelising inode
inactivation, but otherwise it won't make any noticeable/measurable
difference to most workloads, because the unlinked lists almost never
grow more than one inode in length and so O(N) ~= O(1) almost all
the time...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 42+ messages
2019-01-31 23:17 [PATCH 0/8] xfs: incore unlinked list Darrick J. Wong
2019-01-31 23:17 ` [PATCH 1/8] xfs: clean up iunlink functions Darrick J. Wong
2019-02-01  8:01   ` Christoph Hellwig
2019-02-02 19:15     ` Darrick J. Wong
2019-01-31 23:18 ` [PATCH 2/8] xfs: track unlinked inode counts in per-ag data Darrick J. Wong
2019-02-01 18:59   ` Brian Foster
2019-02-01 19:33     ` Darrick J. Wong
2019-02-02 16:14       ` Christoph Hellwig
2019-02-02 19:28         ` Darrick J. Wong
2019-01-31 23:18 ` [PATCH 3/8] xfs: refactor AGI unlinked bucket updates Darrick J. Wong
2019-02-01 19:00   ` Brian Foster
2019-02-02 19:50     ` Darrick J. Wong
2019-02-02 16:21   ` Christoph Hellwig
2019-02-02 19:51     ` Darrick J. Wong
2019-01-31 23:18 ` [PATCH 4/8] xfs: strengthen AGI unlinked inode bucket pointer checks Darrick J. Wong
2019-02-01 19:00   ` Brian Foster
2019-02-02 16:22   ` Christoph Hellwig
2019-01-31 23:18 ` [PATCH 5/8] xfs: refactor inode unlinked pointer update functions Darrick J. Wong
2019-02-01 19:01   ` Brian Foster
2019-02-02 22:00     ` Darrick J. Wong
2019-02-02 16:27   ` Christoph Hellwig
2019-02-02 20:29     ` Darrick J. Wong
2019-01-31 23:18 ` [PATCH 6/8] xfs: hoist unlinked list search and mapping to a separate function Darrick J. Wong
2019-02-01 19:01   ` Brian Foster
2019-02-02 20:46     ` Darrick J. Wong
2019-02-04 13:18       ` Brian Foster
2019-02-04 16:31         ` Darrick J. Wong
2019-02-02 16:30   ` Christoph Hellwig
2019-02-02 20:42     ` Darrick J. Wong
2019-02-02 16:51   ` Christoph Hellwig
2019-01-31 23:18 ` [PATCH 7/8] xfs: add tracepoints for high level iunlink operations Darrick J. Wong
2019-02-01 19:01   ` Brian Foster
2019-02-01 19:14     ` Darrick J. Wong
2019-01-31 23:18 ` [PATCH 8/8] xfs: cache unlinked pointers in an rhashtable Darrick J. Wong
2019-02-01  8:03   ` Christoph Hellwig
2019-02-01 23:59     ` Dave Chinner [this message]
2019-02-02  4:31       ` Darrick J. Wong
2019-02-02 16:07         ` Christoph Hellwig
2019-02-01 19:29   ` Brian Foster
2019-02-01 19:40     ` Darrick J. Wong
2019-02-02 17:01   ` Christoph Hellwig
2019-02-01  7:57 ` [PATCH 0/8] xfs: incore unlinked list Christoph Hellwig
