From: Dave Chinner <david@fromorbit.com>
To: Brian Foster <bfoster@redhat.com>
Cc: Alexander Polakov <apolyakov@beget.ru>,
	linux-mm@kvack.org, linux-xfs@vger.kernel.org,
	bugzilla-daemon@bugzilla.kernel.org
Subject: Re: [Bug 192981] New: page allocation stalls
Date: Fri, 17 Feb 2017 09:21:29 +1100
Message-ID: <20170216222129.GB15349@dastard>
In-Reply-To: <20170216172034.GC11750@bfoster.bfoster>

On Thu, Feb 16, 2017 at 12:20:34PM -0500, Brian Foster wrote:
> On Thu, Feb 16, 2017 at 01:56:30PM +0300, Alexander Polakov wrote:
> > On 02/15/2017 09:09 PM, Brian Foster wrote:
> > > Ah, Ok. It sounds like this allows the reclaim thread to carry on into
> > > other shrinkers and free up memory that way, perhaps. This sounds kind
> > > of similar to the issue brought up previously here[1], but not quite the
> > > same in that instead of backing off of locking to allow other shrinkers
> > > to progress, we back off of memory allocations required to free up
> > > inodes (memory).
> > > 
> > > In theory, I think something analogous to a trylock for inode to buffer
> > > mappings that are no longer cached (or more specifically, cannot
> > > currently be allocated) may work around this, but it's not immediately
> > > clear to me whether that's a proper fix (it's also probably not a
> > > trivial change either). I'm still kind of curious why we end up with
> > > dirty inodes with reclaimed buffers. If this problem repeats, is it
> > > always with a similar stack (i.e., reclaim -> xfs_iflush() ->
> > > xfs_imap_to_bp())?
> > 
> > Looks like it is.
> > 
> > > How many independent filesystems are you running this workload against?
> > 
> > storage9 : ~ [0] # mount|grep storage|grep xfs|wc -l
> > 15
> > storage9 : ~ [0] # mount|grep storage|grep ext4|wc -l
> > 44
> > 
> 
> So a decent number of fs', more ext4 than XFS. Are the XFS fs' all of
> similar size/geometry? If so, can you send representative xfs_info
> output for the fs'?
> 
> I'm reading back through that reclaim thread[1] and it appears this
> indeed is not a straightforward issue. It sounds like the summary is
> Chris hit the same general behavior you have and is helped by bypassing
> the synchronous nature of the shrinker. This allows other shrinkers to
> proceed, but this is not a general solution because other workloads
> depend on the synchronous shrinker behavior to throttle direct reclaim.
> I can't say I understand all of the details and architecture of how/why
> that is the case.

It's complicated, made worse by the state of flux of the mm reclaim
subsystem and the frequent regressions in behaviour that come and
go. This makes testing modifications to the shrinker behaviour
extremely challenging - trying to separate shrinker artifacts from
"something else has changed in memory reclaim" takes a lot of
time....

> FWIW, it sounds like the first order problem is that we generally don't
> want to find/flush dirty inodes from reclaim.

Right, because that forces out-of-order inode writeback and it
degenerates into blocking on small random writes.
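
As a toy model of why the ordering matters - plain userspace C with
made-up numbers, not the XFS code paths - consider inode cluster
buffers that sit contiguously on disk, flushed once in the order
reclaim happens to walk the inodes and once in AIL order:

#include <stdio.h>
#include <stdlib.h>

/*
 * Toy model, not XFS code: compare the IO pattern of flushing
 * dirty inode cluster buffers in AIL (disk-ish) order with the
 * effectively random order that memory reclaim visits inodes in.
 * Adjacent disk blocks merge into one IO; everything else is a
 * separate small write.
 */
#define NINODES	1024

static int cmp_ulong(const void *a, const void *b)
{
	unsigned long x = *(const unsigned long *)a;
	unsigned long y = *(const unsigned long *)b;

	return (x > y) - (x < y);
}

static int count_ios(const unsigned long *blk, int n)
{
	int ios = 1;

	for (int i = 1; i < n; i++)
		if (blk[i] != blk[i - 1] + 1)
			ios++;
	return ios;
}

int main(void)
{
	unsigned long blk[NINODES];

	/* assume the clusters were allocated contiguously on disk */
	for (int i = 0; i < NINODES; i++)
		blk[i] = i;

	/* reclaim walks inodes in an order uncorrelated with their
	 * disk location: model that as a Fisher-Yates shuffle */
	srandom(42);
	for (int i = NINODES - 1; i > 0; i--) {
		int j = random() % (i + 1);
		unsigned long t = blk[i];

		blk[i] = blk[j];
		blk[j] = t;
	}
	printf("reclaim-order flush: %d IOs\n", count_ios(blk, NINODES));

	/* AIL pushing flushes oldest-first, which tracks disk order
	 * here, so adjacent writes merge into large IOs */
	qsort(blk, NINODES, sizeof(blk[0]), cmp_ulong);
	printf("AIL-order flush:     %d IOs\n", count_ios(blk, NINODES));
	return 0;
}

Run it and the reclaim-order count comes out near the number of
inodes, while the AIL order collapses to a single merged IO.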

> A couple things that might
> help avoid this situation are more aggressive
> /proc/sys/fs/xfs/xfssyncd_centisecs tuning or perhaps considering a
> smaller log size would cause more tail pushing pressure on the AIL
> instead of pressure originating from memory reclaim. The latter might
> not be so convenient if this is an already populated backup server,
> though.
> 
> Beyond that, there's Chris' patch, another patch that Dave proposed[2],
> and obviously your hack here to defer inode reclaim entirely to the
> workqueue (I've CC'd Dave since it sounds like he might have been
> working on this further..).

I was working on a more solid set of changes, but every time I
updated the kernel tree I used as my base for development, the
baseline kernel reclaim behaviour would change. I'd isolate the
behavioural change, upgrade to the kernel that contained the fix,
and then trip over some new whacky behaviour that made no sense. I
spent more time in this loop than actually trying to fix the XFS
problem - chasing a moving target makes finding the root cause of
the reclaim stalls just about impossible. 

Brian, I can send you what I have but it's really just a bag of
bolts at this point because I was never able to validate that any of
the patches made a measurable improvement to reclaim behaviour under
any workload I ran.....

FWIW, the major problem with removing the blocking in inode reclaim
is the ease with which you can then trigger the OOM killer from
userspace.  The high level memory reclaim algorithms break down when
there are hundreds of direct reclaim processes hammering on reclaim
and reclaim stops making progress because it's skipping dirty
objects.  Direct reclaim ends up insufficiently throttled, so rather
than blocking, it winds up the reclaim priority and then declares OOM
because reclaim runs out of retries before sufficient memory has
been freed.
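
To illustrate that failure mode, here's a grossly simplified
userspace model of the loop - not mm/vmscan.c, though DEF_PRIORITY
and MAX_RECLAIM_RETRIES mirror the kernel's constants, and the LRU
size and dirty ratio are invented. Because dirty objects are skipped
rather than waited on, nothing becomes reclaimable between retries,
progress stays flat, and the retry budget expires:

#include <stdio.h>

/*
 * Grossly simplified model of the direct reclaim loop described
 * above; userspace C, not mm/vmscan.c.  DEF_PRIORITY and
 * MAX_RECLAIM_RETRIES mirror the kernel's values; the LRU size
 * and dirty ratio are invented.  Each pass scans
 * lru_size >> priority objects; dirty objects are skipped
 * instead of being flushed or waited on.
 */
#define DEF_PRIORITY		12
#define MAX_RECLAIM_RETRIES	16

static unsigned long lru_size = 1UL << 20;	/* objects on the LRU */
static double dirty_ratio = 0.98;		/* nearly all dirty */

static unsigned long shrink(int priority)
{
	unsigned long nr_to_scan = lru_size >> priority;

	/* only the clean fraction is actually freed */
	return (unsigned long)(nr_to_scan * (1.0 - dirty_ratio));
}

int main(void)
{
	unsigned long goal = 1UL << 16;		/* allocation demand */

	for (int retry = 0; retry < MAX_RECLAIM_RETRIES; retry++) {
		unsigned long freed = 0;

		/* wind the priority up: scan more of the LRU each pass */
		for (int prio = DEF_PRIORITY; prio >= 0; prio--) {
			freed += shrink(prio);
			if (freed >= goal) {
				puts("reclaimed enough, no OOM");
				return 0;
			}
		}
		/* nothing blocked on IO, so no dirty object became
		 * clean between retries: progress stays flat */
		printf("retry %2d: freed %lu of %lu\n", retry, freed, goal);
	}
	puts("out of retries: declaring OOM");
	return 1;
}

Run it and you get sixteen retries with identical freed counts, then
the OOM declaration - no amount of priority wind-up helps when the
skipped objects stay dirty.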

That, right now, looks to be an unsolvable problem without a major
rework of direct reclaim.  I've pretty much given up on ever seeing
the unbound direct reclaim concurrency that is causing these
problems get fixed, so we are left to handle it in the subsystem
shrinkers as best we can. That leaves us with an unfortunate choice:

	a) throttle excessive concurrency in the shrinker to prevent
	   IO breakdown, thereby causing reclaim latency bubbles
	   under load but having a stable, reliable system; or
	b) optimise for minimal reclaim latency and risk userspace
	   memory demand triggering the OOM killer whenever there
	   are lots of dirty inodes in the system.

Quite frankly, there's only one choice we can make in this
situation: reliability is always more important than performance.
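
In code terms, choice (a) amounts to nothing more than bounding the
number of concurrent scanners and making the excess callers wait. A
minimal sketch with POSIX semaphores - hypothetical numbers,
userspace only, not the actual XFS shrinker:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Choice (a) in miniature: a counting semaphore bounds how many
 * direct reclaim threads can be inside the shrinker scan path at
 * once.  Excess callers sleep at the throttle point (the latency
 * bubble), which is what keeps the IO issued by the scanners
 * bounded and mergeable.  Hypothetical userspace sketch, not the
 * XFS shrinker; build with -pthread.
 */
#define MAX_CONCURRENT_SCANNERS	4
#define NR_RECLAIMERS		32

static sem_t scan_sem;

static void *direct_reclaim(void *arg)
{
	long id = (long)arg;

	sem_wait(&scan_sem);		/* throttle: at most 4 inside */
	printf("reclaimer %2ld scanning\n", id);
	usleep(10 * 1000);		/* stand-in for flushing inodes */
	sem_post(&scan_sem);
	return NULL;
}

int main(void)
{
	pthread_t tids[NR_RECLAIMERS];

	sem_init(&scan_sem, 0, MAX_CONCURRENT_SCANNERS);
	for (long i = 0; i < NR_RECLAIMERS; i++)
		pthread_create(&tids[i], NULL, direct_reclaim, (void *)i);
	for (int i = 0; i < NR_RECLAIMERS; i++)
		pthread_join(tids[i], NULL);
	sem_destroy(&scan_sem);
	return 0;
}

The sem_wait() is exactly the reclaim latency bubble from (a); it is
also what stops 32 reclaimers from turning the inode flushing into
the random IO mess described above.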

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
