linux-mm.kvack.org archive mirror
From: Liu Shixin <liushixin2@huawei.com>
To: Johannes Weiner <hannes@cmpxchg.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Michal Hocko <mhocko@kernel.org>,
	Shakeel Butt <shakeelb@google.com>, Roman Gushchin <guro@fb.com>,
	Muchun Song <muchun.song@linux.dev>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
	<cgroups@vger.kernel.org>, Nanyong Sun <sunnanyong@huawei.com>,
	Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [Question]: pagecache thrashing and hard to trigger OOM in cgroup
Date: Wed, 22 Nov 2023 11:26:07 +0800
Message-ID: <c2f4a2fa-3bde-72ce-66f5-db81a373fdbc@huawei.com>


Hi everyone,

Recently, we encountered an IO performance issue caused by pagecache thrashing in
a cgroup, and we tracked it down to commit 815744d75152 ("mm: memcontrol:
don't batch updates of local VM stats and events").

The problem can easily be reproduced in a docker environment. First, create a container
with a 4G memory limit and a 2G swap limit, then run a program that allocates (6G - 50M)
of anon memory, so only 50M of memory remains usable and there is no free swap space.
Then run "yum install gcc": we can observe that the yum process thrashes and IO
stays high for a long time, but OOM is not triggered. This affects other processes and
containers on the machine.

After analysis, we found a large number of readahead failures during this time.
Since page allocations for pagecache readahead carry the __GFP_NORETRY flag, OOM
is skipped when the memcg limit is reached. The pagecache is repeatedly allocated and
reclaimed, and the workingset_refault_file counter is high. These readahead attempts take
a lot of time and consume a lot of IO throughput, impacting the entire system. This continues
for a long time, until some other page allocation triggers OOM.
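The __GFP_NORETRY behavior comes from the readahead allocation mask: the kernel ORs
__GFP_NORETRY | __GFP_NOWARN into the mapping's gfp mask (see readahead_gfp_mask() in
include/linux/pagemap.h). A simplified userspace model of that logic is below; the flag
bit values here are illustrative placeholders, not the kernel's real definitions.

```c
/* Simplified model of how readahead page allocations pick up
 * __GFP_NORETRY: the readahead mask is the mapping's gfp mask with
 * __GFP_NORETRY | __GFP_NOWARN ORed in, so a failed allocation gives
 * up instead of invoking the OOM killer. Placeholder bit values. */
typedef unsigned int gfp_t;

#define __GFP_NORETRY	(1u << 0)	/* placeholder, not the real bit */
#define __GFP_NOWARN	(1u << 1)	/* placeholder, not the real bit */

static gfp_t readahead_gfp_mask(gfp_t mapping_gfp)
{
	return mapping_gfp | __GFP_NORETRY | __GFP_NOWARN;
}
```

With this mask, reclaim is still attempted on each readahead allocation (hence the repeated
allocate/reclaim cycle and high refault counts), but the memcg OOM path is never entered.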

By bisection, we finally arrived at commit 815744d75152 ("mm: memcontrol: don't batch
updates of local VM stats and events"). Before that commit, the process would trigger OOM
within a very short time. We suspect the difference is caused by the commit's performance changes.

Is there any good way to fix this problem? We would prefer the process to be OOM-killed
rather than hanging the system and affecting other processes.

Thanks,
