Linux-XFS Archive on lore.kernel.org
From: "Darrick J. Wong" <darrick.wong@oracle.com>
To: Dave Chinner <david@fromorbit.com>
Cc: xfs <linux-xfs@vger.kernel.org>
Subject: Re: [PATCH] xfs: don't flush the entire filesystem when a buffered write runs out of space
Date: Thu, 26 Mar 2020 19:51:53 -0700
Message-ID: <20200327025153.GP29351@magnolia> (raw)
In-Reply-To: <20200327022714.GQ10776@dread.disaster.area>

On Fri, Mar 27, 2020 at 01:27:14PM +1100, Dave Chinner wrote:
> On Thu, Mar 26, 2020 at 06:45:58PM -0700, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@oracle.com>
> > 
> > A customer reported rcu stalls and softlockup warnings on a computer
> > with many CPU cores and many many more IO threads trying to write to a
> > filesystem that is totally out of space.  Subsequent analysis pointed to
> > the many many IO threads calling xfs_flush_inodes -> sync_inodes_sb,
> > which causes a lot of wb_writeback_work to be queued.  The writeback
> > worker spends so much time trying to wake the many many threads waiting
> > for writeback completion that it trips the softlockup detector, and (in
> > this case) the system automatically reboots.
> 
> That doesn't sound right. Each writeback work that is queued via
> sync_inodes_sb should only have a single process waiting on its
> completion. And how many threads do you actually need to wake up
> for it to trigger a 10s soft-lockup timeout?
> 
> More detail, please?

It's a two socket 64-core system with some sort of rdma/infiniband magic
and somewhere between 600-900 processes doing who knows what with the
magic.  Each of those threads *also* is writing trace data to its own
separate trace file (three private log files per process).  Hilariously
they never check the return code from write() so they keep pounding the
system forever.

(I don't know what the rdma/infiniband magic is, they won't tell me.)

When the filesystem fills up all the way (it's a 10G fs with 8,207
blocks free) they keep banging away on it until something finally dies.

I tried writing a dumb fstest to simulate the log writer part, but that
never succeeds in triggering the rcu stalls.

If you want the gory details I can send you the dmesg logs.

> > In addition, they complain that the lengthy xfs_flush_inodes scan traps
> > all of those threads in uninterruptible sleep, which hampers their
> > ability to kill the program or do anything else to escape the situation.
> > 
> > Fix this by replacing the full filesystem flush (which is offloaded to a
> > workqueue which we then have to wait for) with directly flushing the
> > file that we're trying to write.
> 
> Which does nothing to flush -other- outstanding delalloc
> reservations and allow the eofblocks/cowblock scan to reclaim unused
> post-EOF speculative preallocations.
> 
> That's the purpose of the xfs_flush_inodes() - without it we can get
> very premature ENOSPC, especially on small filesystems when writing
> largish files in the background. So I'm not sure that dropping the
> sync is a viable solution. It is actually needed.

Yeah, I did kinda wonder about that...

> Perhaps we need to go back to the ancient code that only allowed
> XFS to run a single xfs_flush_inodes() at a time - everything else
> waited on the single flush to complete, then all returned at the
> same time...

That might work too.  Admittedly it's pretty silly to be running this
scan over and over and over considering that there's never going to be
any more free space.

--D

> 
> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com

Thread overview: 6+ messages
2020-03-27  1:45 Darrick J. Wong
2020-03-27  2:27 ` Dave Chinner
2020-03-27  2:51   ` Darrick J. Wong [this message]
2020-03-27  4:50     ` Dave Chinner
2020-03-27  9:08 ` Christoph Hellwig
2020-03-27  9:09   ` Christoph Hellwig
