From: Dave Chinner <david@fromorbit.com>
To: Chris Mason <clm@fb.com>
Cc: "linux-xfs@vger.kernel.org" <linux-xfs@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	Jens Axboe <axboe@kernel.dk>
Subject: Re: [PATCH 09/24] xfs: don't allow log IO to be throttled
Date: Tue, 6 Aug 2019 09:09:45 +1000	[thread overview]
Message-ID: <20190805230945.GX7777@dread.disaster.area> (raw)
In-Reply-To: <C823BAA1-18D5-4C25-9506-59A740817E8C@fb.com>

On Mon, Aug 05, 2019 at 06:32:51PM +0000, Chris Mason wrote:
> On 2 Aug 2019, at 19:28, Dave Chinner wrote:
> 
> > On Fri, Aug 02, 2019 at 02:11:53PM +0000, Chris Mason wrote:
> >> On 1 Aug 2019, at 19:58, Dave Chinner wrote:
> >> I can't really see bio->b_ioprio working without the rest of the IO
> >> controller logic creating a sensible system,
> >
> > That's exactly the problem we need to solve. The current situation
> > is ... untenable. Regardless of whether the io.latency controller
> > works well, the fact is that the wbt subsystem is active on -all-
> > configurations and the way it "prioritises" is completely broken.
> 
> Completely broken is probably a little strong.   Before wbt, it was 
> impossible to do buffered IO without periodically saturating the drive 
> in unexpected ways.  We've got a lot of data showing it helping, and 
> it's pretty easy to set up a new A/B experiment to demonstrate its 
> usefulness in current kernels.  But that doesn't mean it's perfect.

I'm not arguing that wbt is useless; I'm just saying that its
design w.r.t. IO prioritisation is fundamentally broken. Using
request types to try to infer priority just doesn't work, as I've
been trying to explain.

> >> framework to define weights etc.  My question is if it's worth trying
> >> inside of the wbt code, or if we should just let the metadata go
> >> through.
> >
> > As I said, that doesn't solve the problem. We /want/ critical
> > journal IO to have higher priority than background metadata
> > writeback. Just ignoring REQ_META doesn't help us there - it just
> > moves the priority inversion to blocking on request queue tags.
> 
> Does XFS background metadata IO ever get waited on by critical journal 
> threads?

No. Background writeback (which, with this series, is the only way
metadata gets written in XFS) is almost entirely non-blocking until
IO submission occurs. It will force the log if pinned items are
preventing the log tail from moving (hence blocking on log IO), but
largely it doesn't block on anything except IO submission.

The only thing that blocks on journal IO is CIL flushing and,
subsequently, anything that is waiting on a journal flush to
complete. CIL flushing happens in its own workqueue, so it doesn't
block anything directly. The only operations that wait for log IO
require items to be stable in the journal (e.g. fsync()).

Starting a transactional change may block on metadata writeback. If
there isn't space in the log for the new transaction, it will kick
and wait for background metadata writeback to make progress and push
the tail of the log forwards.  And this may wait on journal IO if
pinned items need to be flushed to the log before writeback can
occur.

This is the way we prevent transactions requiring journal IO from
blocking on metadata writeback to make progress - we don't allow a
transaction to start until it is guaranteed that it can complete
without requiring journal IO to flush other metadata to the journal.
That way there is always space available in the log for all pending
journal IO to complete without a dependency on metadata writeback
making progress.

This "block on metadata writeback at transaction start" design means
data writeback can block on metadata writeback because we do
allocation transactions in the IO path. Which means data IO can
block behind metadata IO, which can block behind log IO, and that
largely defines the IO hierarchy in XFS.

Hence the IO priority order is very clear in XFS - it was designed
this way because you can't support things like guaranteed rate IO
storage applications (one of the prime use cases XFS was originally
designed for) without having a clear model for avoiding priority
inversions between data, metadata and the journal.

I'm not guessing about any of this - I know how all this is supposed
to work because I spent years at SGI working with people far smarter
than me supporting real-time IO applications working along with
real-time IO schedulers in a real time kernel (i.e.  Irix). I don't
make this stuff up for fun or to argue, I say stuff because I know
how it's supposed to work.

And, FWIW, Irix also had a block layer writeback throttling
mechanism to prevent bulk data writeback from thrashing disks and
starving higher priority IO. It was also fully IO priority aware -
this stuff isn't rocket science, and Linux is not the first OS to
ever implement this sort of functionality. Linux was not my first
rodeo....

> My understanding is that all of the filesystems do this from 
> time to time.  Without a way to bump the priority of throttled 
> background metadata IO, I can't see how to avoid prio inversions without 
> running background metadata at the same prio as all of the critical 
> journal IO.

Perhaps you just haven't thought about it enough. :)

> > Core infrastructure needs to work without cgroups being configured
> > to confine everything in userspace to "safe" bounds, and right now
> > just running things in the root cgroup doesn't appear to work very
> > well at all.
> 
> I'm not disagreeing with this part, my real point is there isn't a 
> single answer.  It's possible for swap to be critical to the running of 
> the box in some workloads, and totally unimportant in others.

Sure, but that only indicates that we need to be able to adjust the
priority of IO within certain bounds.

The problem right now is that the default behaviour is pretty
nasty and core functionality is non-functional. It doesn't matter if
swap priority is adjustable or not, users should not have to tune
the kernel to use an esoteric cgroup configuration in order for the
kernel to function correctly out of the box.

I'm not sure when we lost sight of the fact we need to make the
default configurations work correctly first, and only then do we
worry about how tunable something is when the default behaviour has
been proven to be insufficient. Hiding bad behaviour behind custom
cgroup configuration does nobody any favours.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
