From: Brian Foster <bfoster@redhat.com>
To: Martin Svec <martin.svec@zoner.cz>
Cc: Dave Chinner <david@fromorbit.com>, linux-xfs@vger.kernel.org
Subject: Re: Quota-enabled XFS hangs during mount
Date: Mon, 23 Jan 2017 08:44:52 -0500
Message-ID: <20170123134452.GA33287@bfoster.bfoster>
In-Reply-To: <43ca55d0-6762-d54f-5ba9-a83f9c1988f6@zoner.cz>

On Mon, Jan 23, 2017 at 10:44:20AM +0100, Martin Svec wrote:
> Hello Dave,
> 
> Any updates on this? It's a bit annoying to work around the bug by increasing RAM just because of the
> initial quotacheck.
> 

Note that Dave is away on a bit of an extended vacation[1]. It looks
like he was in the process of fishing through the code to spot any
potential problems related to quotacheck+reclaim. I see you've cc'd him
directly, so we'll see if we get a response as to whether he got
anywhere with that...

Skimming back through this thread, it looks like we have an issue where
quotacheck is not reliable once memory reclaim kicks in, and you appear
to be reproducing this due to a probably unique combination of a large
inode count and low memory.
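
(For a rough sense of scale from the numbers quoted below: ~5.2 million
allocated inodes, and assuming 256-byte on-disk inodes -- the inode size
isn't stated anywhere in this thread, and v5 filesystems default to 512
bytes, which would double the figure -- the inode buffers alone work out
to:

  $ echo $((5214637 * 256))
  1334947072

i.e. roughly 1.3GB, in the same ballpark as the ~1.5GB minimum Dave
mentions below, and easily enough to push a 2GB guest into reclaim
during the scan.)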

Is my understanding correct that you've reproduced this on more recent
kernels than the original report? If so, and we don't hear back from
Dave in a reasonable time, it might be useful to provide a metadump of
the fs if possible. That would allow us to restore it in a similarly
low-RAM vm configuration, trigger quotacheck and try to reproduce the
hang directly...
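
If it does come to that, here's a rough sketch of the round trip (the
paths and mount point below are just placeholders); xfs_metadump copies
metadata only and obfuscates filenames by default, so no file contents
would leave your box:

  # on your side: dump the filesystem's metadata to a file
  xfs_metadump /dev/sdd1 /tmp/www.metadump

  # on our side: restore to a sparse image, attach it to a ~2GB VM, and
  # mount with quota enabled so the mount-time quotacheck runs
  xfs_mdrestore /tmp/www.metadump /tmp/www.img
  mount -o loop,usrquota /tmp/www.img /mnt/test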

Brian

[1] http://www.spinics.net/lists/linux-xfs/msg02836.html

> Thank you,
> Martin
> 
> On 3.11.2016 at 21:40, Dave Chinner wrote:
> > On Thu, Nov 03, 2016 at 01:04:39PM +0100, Martin Svec wrote:
> >> On 3.11.2016 at 2:31, Dave Chinner wrote:
> >>> On Wed, Nov 02, 2016 at 05:31:00PM +0100, Martin Svec wrote:
> >>>>> How many inodes? How much RAM?
> >>>> orthosie:~# df -i
> >>>> Filesystem        Inodes   IUsed     IFree IUse% Mounted on
> >>>> /dev/sdd1      173746096 5214637 168531459    4% /www
> >>>>
> >>>> The virtual machine has 2 virtual cores and 2 GB RAM. None of it is a bottleneck, I think.
> >>> Even though you think this is irrelevant and not important, it
> >>> actually points me directly at a potential vector and a reason as to
> >>> why this is not a commonly seen problem.
> >>>
> >>> i.e. 5.2 million inodes with only 2GB RAM is enough to cause memory
> >>> pressure during a quotacheck. inode buffers alone will require
> >>> a minimum of 1.5GB RAM over the course of the quotacheck, and memory
> >>> reclaim will iterate cached dquots and try to flush them, thereby
> >>> exercising the flush lock /before/ the quotacheck scan completion
> >>> dquot writeback tries to take it.
> >>>
> >>> Now I need to go read code....
> >> Yes, that makes sense, I didn't know that quotacheck requires all inodes to be loaded in memory at
> >> the same time.
> > It doesn't require all inodes to be loaded into memory. Indeed, the
> > /inodes/ don't get cached during a quotacheck. What does get cached
> > is the metadata buffers that are traversed - the problem comes when
> > the caches are being reclaimed....
> >
> >> I temporarily increased the virtual machine's RAM to 3GB and the
> >> problem is gone! Setting RAM back to 2GB reproducibly causes the
> >> quotacheck to freeze again, 1GB of RAM results in OOM... So you're
> >> right that the flush deadlock is triggered by memory pressure.
> > Good to know.
> >
> >> Sorry, I forgot to attach sysrq-w output to the previous response. Here it is:
> > <snip>
> >
> > ok, nothing else apparently stuck, so this is likely a leaked lock
> > or completion.
> >
> > Ok, I'll keep looking.
> >
> > Cheers,
> >
> > Dave.
> 
> 
