From: Martin Svec <martin.svec@zoner.cz>
To: Brian Foster <bfoster@redhat.com>
Cc: Dave Chinner <david@fromorbit.com>, linux-xfs@vger.kernel.org
Subject: Re: Quota-enabled XFS hangs during mount
Date: Fri, 27 Jan 2017 22:00:43 +0100
Message-ID: <87e40a9f-751a-b9ac-8558-3556625b76c5@zoner.cz>
In-Reply-To: <5dc542b4-1fc7-2b67-0911-76914edf725e@zoner.cz>

On 27.1.2017 at 21:49, Martin Svec wrote:
> On 27.1.2017 at 18:07, Brian Foster wrote:
>> On Fri, Jan 27, 2017 at 02:06:45PM +0100, Martin Svec wrote:
>>> On 26.1.2017 at 20:12, Brian Foster wrote:
>>>> On Thu, Jan 26, 2017 at 06:46:42PM +0100, Martin Svec wrote:
>>>>> Hello,
>>>>>
>>>>> On 25.1.2017 at 23:17, Brian Foster wrote:
>>>>>> On Tue, Jan 24, 2017 at 02:17:36PM +0100, Martin Svec wrote:
>>>>>>> Hello,
>>>>>>>
>>>>>>> On 23.1.2017 at 14:44, Brian Foster wrote:
>>>>>>>> On Mon, Jan 23, 2017 at 10:44:20AM +0100, Martin Svec wrote:
>>>>>>>>> Hello Dave,
>>>>>>>>>
>>>>>>>>> Any updates on this? It's a bit annoying to work around the bug by increasing RAM just because of the
>>>>>>>>> initial quotacheck.
>>>>>>>>>
>>>>>>>> Note that Dave is away on a bit of an extended vacation[1]. It looks
>>>>>>>> like he was in the process of fishing through the code to spot any
>>>>>>>> potential problems related to quotacheck+reclaim. I see you've cc'd him
>>>>>>>> directly, so we'll see if we get a response as to whether he got anywhere
>>>>>>>> with that...
>>>>>>>>
>>>>>>>> Skimming back through this thread, it looks like we have an issue where
>>>>>>>> quotacheck is not quite reliable in the event of reclaim, and you
>>>>>>>> appear to be reproducing this due to a probably unique combination of
>>>>>>>> large inode count and low memory.
>>>>>>>>
>>>>>>>> Is my understanding correct that you've reproduced this on more recent
>>>>>>>> kernels than the original report? 
>>>>>>> Yes, I repeated the tests using 4.9.3 kernel on another VM where we hit this issue.
>>>>>>>
>>>>>>> Configuration:
>>>>>>> * vSphere 5.5 virtual machine, 2 vCPUs, virtual disks residing on iSCSI VMFS datastore
>>>>>>> * Debian Jessie 64 bit webserver, vanilla kernel 4.9.3
>>>>>>> * 180 GB XFS data disk mounted as /www
>>>>>>>
>>>>>>> Quotacheck behavior depends on assigned RAM:
>>>>>>> * 2 or less GiB: mount /www leads to a storm of OOM kills including shell, ttys etc., so the system
>>>>>>> becomes unusable.
>>>>>>> * 3 GiB: mount /www task hangs in the same way as I reported earlier in this thread.
>>>>>>> * 4 or more GiB: mount /www succeeds.
>>>>>>>
>>>>>> I was able to reproduce the quotacheck OOM situation on latest kernels.
>>>>>> This problem actually looks like a regression as of commit 17c12bcd3
>>>>>> ("xfs: when replaying bmap operations, don't let unlinked inodes get
>>>>>> reaped"), but I don't think that patch is the core problem. That patch
>>>>>> pulled up setting MS_ACTIVE on the superblock from after XFS runs
>>>>>> quotacheck to before it (for other reasons), which has a side effect of
>>>>>> causing inodes to be placed onto the lru once they are released. Before
>>>>>> this change, all inodes were immediately marked for reclaim once
>>>>>> released from quotacheck because the superblock had not been set active.
>>>>>>
>>>>>> The problem here is first that quotacheck issues a bulkstat and thus
>>>>>> grabs and releases every inode in the fs. The quotacheck occurs at mount
>>>>>> time, which means we still hold the s_umount lock and thus the shrinker
>>>>>> cannot run even though it is registered. Therefore, we basically just
>>>>>> populate the lru until we've consumed too much memory and blow up.
>>>>>>
>>>>>> I think the solution here is to preserve the quotacheck behavior prior
>>>>>> to commit 17c12bcd3 via something like the following:
>>>>>>
>>>>>> --- a/fs/xfs/xfs_qm.c
>>>>>> +++ b/fs/xfs/xfs_qm.c
>>>>>> @@ -1177,7 +1177,7 @@ xfs_qm_dqusage_adjust(
>>>>>>  	 * the case in all other instances. It's OK that we do this because
>>>>>>  	 * quotacheck is done only at mount time.
>>>>>>  	 */
>>>>>> -	error = xfs_iget(mp, NULL, ino, 0, XFS_ILOCK_EXCL, &ip);
>>>>>> +	error = xfs_iget(mp, NULL, ino, XFS_IGET_DONTCACHE, XFS_ILOCK_EXCL, &ip);
>>>>>>  	if (error) {
>>>>>>  		*res = BULKSTAT_RV_NOTHING;
>>>>>>  		return error;
>>>>>>
>>>>>> ... which allows quotacheck to run as normal in my quick tests. Could
>>>>>> you try this on your more recent kernel tests and see whether you still
>>>>>> reproduce any problems?
>>>>> The above patch fixes OOM issues and reduces overall memory consumption during quotacheck. However,
>>>>> it does not fix the original xfs_qm_flush_one() freezing. I'm still able to reproduce it with 1 GB
>>>>> of RAM or lower. Tested with 4.9.5 kernel.
>>>>>
>>>> Ok, thanks. I'll get that fix posted shortly.
>>>>
>>>> I hadn't tried reducing RAM any further. I dropped my vm down to 1GB and
>>>> I don't reproduce a hang. If I drop to 512MB, the mount actually crashes
>>>> due to what looks like the problem that djwong just fixed[1].
>>>>
>>>> With that one liner applied, it does look like I've hit a mount hang in
>>>> the quotacheck path. Note that I'm also running into OOM issues again
>>>> though, probably due to legitimately not having enough RAM for this vm.
>>>> Anyways, I'll see if I can dig anything out of that...
>>>>
>>>> FWIW, this is all on the latest for-next (4.10.0-rc5).
>>>>
>>>> [1] http://www.spinics.net/lists/linux-xfs/msg03869.html
>>>>
>>>>> If it makes sense to you, I can rsync the whole filesystem to a new XFS volume and repeat the tests.
>>>>> At least, that could tell us if the problem depends on a particular state of on-disk metadata
>>>>> structures or it's a general property of the given filesystem tree.
>>>>>
>>>> That couldn't hurt, thanks.
>>>>
>>> Well, after rsync to a fresh non-resized XFS volume, I still hit the mount hang with 1GB RAM.
>>>
>> The problem looks like a race between dquot reclaim and quotacheck. The
>> high level sequence of events is as follows:
>>
>>  - During quotacheck, xfs_qm_dqiterate() walks the physical dquot
>>    buffers and queues them to the delwri queue.
>>  - Next, kswapd kicks in and attempts to reclaim a dquot that is backed
>>    by a buffer on the quotacheck delwri queue. xfs_qm_dquot_isolate()
>>    acquires the flush lock and attempts to queue to the reclaim delwri
>>    queue. This silently fails because the buffer is already queued.
>>
>>    From this point forward, the dquot flush lock is not going to be
>>    released until the buffer is submitted for I/O and completed via
>>    quotacheck.
>>  - Quotacheck continues on to the xfs_qm_flush_one() pass, hits the
>>    dquot in question and waits on the flush lock to issue the flush of
>>    the recalculated values. *deadlock*
>>
>> There are at least a few ways to deal with this. We could do something
>> granular to fix up the reclaim path to check whether the buffer is
>> already queued or something of that nature before we actually invoke the
>> flush. I think this is effectively pointless, however, because the first
>> part of quotacheck walks and queues all physical dquot buffers anyways.
>>
>> In other words, I think dquot reclaim during quotacheck should probably
>> be bypassed. Given that, we could either adjust when the shrinker is
>> registered until after quotacheck or set a flag somewhere to cause dquot
>> reclaim to back out when quotacheck is running. I opted for something
>> like the latter. Care to test the appended patch?
>>
>> Note that I think this does mean that you could still have low memory
>> issues if you happen to have a lot of quotas defined...
>>
> Looks good, no more hangs with 1 GB. Thank you, Brian.
>
> If I further reduce RAM to 512 MB, mount succeeds too but multiple "BUG: Bad page state in process
> mount" errors are reported. Is it one of the expected low memory issues?
>
> Martin
>
Well, reading back through this thread, this might be related to patch [1], which I didn't apply to
4.9.5. I'll retry it next week.

[1] http://www.spinics.net/lists/linux-xfs/msg03869.html

Martin




Thread overview: (21+ messages)
2016-11-01 16:45 Martin Svec
2016-11-01 21:58 ` Dave Chinner
2016-11-02 16:31   ` Martin Svec
2016-11-03  1:31     ` Dave Chinner
2016-11-03 12:04       ` Martin Svec
2016-11-03 20:40         ` Dave Chinner
2017-01-23  9:44           ` Martin Svec
2017-01-23 13:44             ` Brian Foster
2017-01-23 22:06               ` Dave Chinner
2017-01-24 13:17               ` Martin Svec
2017-01-25 15:36                 ` Brian Foster
2017-01-25 22:17                 ` Brian Foster
2017-01-26 17:46                   ` Martin Svec
2017-01-26 19:12                     ` Brian Foster
2017-01-27 13:06                       ` Martin Svec
2017-01-27 17:07                         ` Brian Foster
2017-01-27 20:49                           ` Martin Svec
2017-01-27 21:00                             ` Martin Svec [this message]
2017-01-27 23:17                               ` Darrick J. Wong
2017-01-28 22:42                           ` Dave Chinner
2017-01-30 15:31                             ` Brian Foster
