From: Alexander Polakov <apolyakov@beget.ru>
To: Brian Foster <bfoster@redhat.com>
Cc: linux-mm@kvack.org, linux-xfs@vger.kernel.org,
	bugzilla-daemon@bugzilla.kernel.org
Subject: Re: [Bug 192981] New: page allocation stalls
Date: Thu, 16 Feb 2017 13:56:30 +0300	[thread overview]
Message-ID: <07ee50bc-8220-dda8-07f9-369758603df9@beget.ru> (raw)
In-Reply-To: <20170215180859.GB62565@bfoster.bfoster>

On 02/15/2017 09:09 PM, Brian Foster wrote:
> Ah, OK. It sounds like this allows the reclaim thread to carry on into
> other shrinkers and free up memory that way, perhaps. This sounds kind
> of similar to the issue brought up previously here[1], but not quite the
> same in that instead of backing off of locking to allow other shrinkers
> to progress, we back off of memory allocations required to free up
> inodes (memory).
>
> In theory, I think something analogous to a trylock for inode-to-buffer
> mappings that are no longer cached (or more specifically, cannot
> currently be allocated) may work around this, but it's not immediately
> clear to me whether that's a proper fix (it's also probably not a
> trivial change either). I'm still kind of curious why we end up with
> dirty inodes with reclaimed buffers. If this problem repeats, is it
> always with a similar stack (i.e., reclaim -> xfs_iflush() ->
> xfs_imap_to_bp())?

Looks like it is.
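
If I understand the trylock idea correctly, it would be something like the
sketch below (xfs_imap_to_bp_nowait() and its -EAGAIN convention are
made-up names purely to illustrate the backoff, not actual XFS code):

	/*
	 * Illustrative only: if the inode's backing buffer is no longer
	 * cached and reading it back in would require a memory allocation,
	 * fail fast so the reclaim thread can move on to other shrinkers
	 * instead of stalling here.
	 */
	STATIC int
	xfs_reclaim_inode_flush_nowait(
		struct xfs_inode	*ip)
	{
		struct xfs_buf		*bp = NULL;
		int			error;

		/* hypothetical non-blocking variant of xfs_imap_to_bp() */
		error = xfs_imap_to_bp_nowait(ip->i_mount, &ip->i_imap, &bp);
		if (error == -EAGAIN)
			return error;	/* uncached: skip, retry this inode later */
		if (error)
			return error;

		error = xfs_iflush_int(ip, bp);	/* flush inode to its buffer */
		xfs_buf_relse(bp);
		return error;
	}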

> How many independent filesystems are you running this workload against?

storage9 : ~ [0] # mount|grep storage|grep xfs|wc -l
15
storage9 : ~ [0] # mount|grep storage|grep ext4|wc -l
44

> Can you describe the workload in more detail?

This is a backup server running rsync. At night our production servers
rsync their files to this server (a lot of small files).

> ...
>>> The bz shows you have non-default vm settings such as
>>> 'vm.vfs_cache_pressure = 200.' My understanding is that this prefers
>>> aggressive inode reclaim, yet the code workaround here is to bypass XFS
>>> inode reclaim. Out of curiosity, have you reproduced this problem using
>>> the default vfs_cache_pressure value (or, if so, possibly moving it in
>>> the other direction)?
>>
>> Yes, we've tried that; it had essentially no influence.
>>
>
> Which... with what values? And by zero influence, do you simply mean the
> stall still occurred, or do you have some other measurement of slab sizes
> or some such that is unaffected?

Unfortunately I don't have slab statistics at hand. Stalls and the
subsequent OOM situation still occurred with this setting at 100.
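
For completeness, the knob in question (200 is our usual value; 100 is the
kernel default we tested with):

storage9 : ~ [0] # sysctl vm.vfs_cache_pressure
vm.vfs_cache_pressure = 200
storage9 : ~ [0] # sysctl -w vm.vfs_cache_pressure=100
vm.vfs_cache_pressure = 100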

-- 
Alexander Polakov | system software engineer | https://beget.com


Thread overview: 27+ messages
     [not found] <bug-192981-27@https.bugzilla.kernel.org/>
2017-01-23 21:51 ` [Bug 192981] New: page allocation stalls Andrew Morton
2017-01-30 15:11   ` Alexander Polakov
2017-02-01 15:27   ` Michal Hocko
2017-02-15 12:56   ` Alexander Polakov
2017-02-15 16:05     ` Brian Foster
2017-02-15 16:52       ` Alexander Polakov
2017-02-15 18:09         ` Brian Foster
2017-02-16 10:56           ` Alexander Polakov [this message]
2017-02-16 17:20             ` Brian Foster
2017-02-16 22:21               ` Dave Chinner
2017-02-17 11:11                 ` Tetsuo Handa
2017-02-17 23:58                   ` Dave Chinner
2017-02-17 19:05                 ` Brian Foster
2017-02-17 23:52                   ` Dave Chinner
2017-02-18 13:05                     ` Brian Foster
