From: Dave Chinner <david@fromorbit.com>
To: Brian Foster <bfoster@redhat.com>
Cc: linux-xfs@vger.kernel.org
Subject: Re: [PATCH 3/7] cache: prevent expansion races
Date: Sat, 3 Nov 2018 10:26:13 +1100	[thread overview]
Message-ID: <20181102232613.GG19305@dastard> (raw)
In-Reply-To: <20181102113138.GA6794@bfoster>

On Fri, Nov 02, 2018 at 07:31:38AM -0400, Brian Foster wrote:
> Fair enough, but I'm still curious if doing something like changing the
> hash trylock in the shaker to a blocking lock would improve shaker
> effectiveness enough to avoid the need for the time-based hackery. It's
> possible it has no effect or just replaces one problem with a set of
> others, but it's an even more trivial change than this patch is.
> 
> Another approach may be to lift the cache shaker from a per-lookup-miss
> execution context to something more central (like its own thread(s))
> such that lookups can block on (bounded) shakes without introducing too
> much concurrency therein. That is certainly more involved than bolting a
> throttle onto cache expansion and may not be worth the effort if the
> long term plan is to replace the whole cache mechanism.
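
Just so we're on the same page about that first change - the trylock
vs blocking lock difference in the shaker is roughly this shape
(sketch only, with made-up names; this is not the actual libxfs
cache.c code, just the trade-off):

	/*
	 * Sketch only - not the real shaker, just the trade-off:
	 * skip a busy chain vs wait for it.
	 */
	for (i = 0; i < hash_size; i++) {
		struct hash_chain	*chain = &hash[i];

		/* current behaviour: busy chains are skipped entirely */
		if (pthread_mutex_trylock(&chain->lock) != 0)
			continue;

		/*
		 * Blocking alternative: pthread_mutex_lock(&chain->lock);
		 * every chain gets shaken, but the shaker and lookups
		 * now wait on each other.
		 */
		freed += shake_unreferenced_nodes(chain);
		pthread_mutex_unlock(&chain->lock);
	}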

I'm more inclined to kill the entire libxfs buffer cache
implementation and MRUs and port the kernel code across with its
reference-based LRU and shrinker. And with that, use AIO so that we
don't need huge numbers of prefetch threads to keep IO in flight.
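
To give an idea of the shape of that (sketch only - buffer and error
handling are waved away, and this uses plain libaio rather than
whatever the final port ends up using):

	#include <libaio.h>
	#include <stdlib.h>
	#include <sys/types.h>

	#define QDEPTH	64

	/* keep a deep queue of reads in flight from a single context */
	static void read_batch(int fd, off_t *offsets, size_t len, int nr)
	{
		io_context_t	ctx = 0;
		struct iocb	iocbs[QDEPTH], *iops[QDEPTH];
		struct io_event	events[QDEPTH];
		int		i;

		io_setup(QDEPTH, &ctx);
		for (i = 0; i < nr && i < QDEPTH; i++) {
			void	*buf;

			posix_memalign(&buf, 4096, len);
			io_prep_pread(&iocbs[i], fd, buf, len, offsets[i]);
			iops[i] = &iocbs[i];
		}

		/* one submission, no per-buffer prefetch thread needed */
		io_submit(ctx, i, iops);

		/* reap completions and hand the buffers to the cache */
		io_getevents(ctx, i, i, events, NULL);
		io_destroy(ctx);
	}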

Getting rid of the repair prefetcher threads removes the major
source of concurrency placed on the cache. Using the kernel
infrastructure also moves us from a global cache to a per-AG cache,
which matches how xfs_repair operates and hence further reduces lock
contention. i.e. it allows threads working within an AG to work
without interference from other AGs.
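
i.e. the locking ends up looking something like this (hypothetical
structures and names, just to show where the contention goes):

	#include <pthread.h>
	#include <stdint.h>

	#define HASH_SIZE	512

	struct cache_node;			/* buffer cache entry, as now */

	/* one cache instance per AG instead of one global cache */
	struct ag_cache {
		pthread_mutex_t		lock;	/* protects this AG only */
		struct cache_node	*hash[HASH_SIZE];
	};

	static struct ag_cache	*ag_caches;	/* one per AG, indexed by agno */

	static struct cache_node *
	ag_cache_lookup(uint32_t agno, uint64_t blkno)
	{
		struct ag_cache		*agc = &ag_caches[agno];
		struct cache_node	*node;

		pthread_mutex_lock(&agc->lock);	/* no cross-AG contention */
		node = agc->hash[blkno % HASH_SIZE];	/* chain walk omitted */
		pthread_mutex_unlock(&agc->lock);
		return node;
	}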

Basically, we are hitting the scalability limits of the libxfs
architecture right now, and we have an architecture in the kernel
that scales way beyond what we have in userspace. And it fits right
in with the way the userspace algorithms operate.

Add to that the need for AIO and delwri lists to scale mkfs to
really large filesystems, and it really comes down to "we need to
port the kernel code and move to AIO" rather than tinker around the
edges of an architecture that we can't easily scale much further
than it currently does...

Cheers,

Dave
-- 
Dave Chinner
david@fromorbit.com
