From: Michal Hocko <mhocko@kernel.org>
To: Xunlei Pang <xlpang@linux.alibaba.com>
Cc: Roman Gushchin <guro@fb.com>,
Johannes Weiner <hannes@cmpxchg.org>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 2/3] mm/vmscan: Enable kswapd to reclaim low-protected memory
Date: Mon, 3 Dec 2018 18:22:02 +0100
Message-ID: <20181203172007.GG31738@dhcp22.suse.cz>
In-Reply-To: <54a3f0a6-6e7d-c620-97f2-ac567c057bc2@linux.alibaba.com>
On Mon 03-12-18 23:20:31, Xunlei Pang wrote:
> On 2018/12/3 7:56 PM, Michal Hocko wrote:
> > On Mon 03-12-18 16:01:18, Xunlei Pang wrote:
> >> Cgroup memory can be overcommitted, and this will only become
> >> more common in the future.
> >>
> >> Let's enable kswapd to reclaim low-protected memory under memory
> >> pressure, to mitigate the global direct reclaim pressure that
> >> causes jitter in the response times of latency-sensitive groups.
> >
> > Please be more descriptive about the problem you are trying to handle
> > here. I haven't actually read the patch, but let me emphasise that the
> > low limit protection is an important isolation tool, and allowing
> > kswapd to reclaim protected memcgs is going to break the semantics as
> > they were introduced and designed.
>
> We have two types of memcgs: online groups (important business)
> and offline groups (unimportant business). Online groups are
> all configured with the maximum low protection, while offline
> groups are not protected at all (default low of 0).
>
> When offline groups are overcommitted, global memory pressure
> rises. Memory allocations from online groups then constantly
> fall into slow global direct reclaim in order to reclaim the
> online groups' page cache, because kswapd is not able to reclaim
> low-protected memory. low is not a hard limit; it is reasonable
> for it to be reclaimed by kswapd when there is no other
> reclaimable memory.
I am sorry, I still do not follow. What role do offline cgroups play
here? Those are certainly not low-protected, because
mem_cgroup_css_offline() will reset their low protection to 0.
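For completeness, the relevant lines of mem_cgroup_css_offline() in
mm/memcontrol.c (v4.20-era, paraphrased excerpt; the rest of the
teardown is elided):

	static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
	{
		struct mem_cgroup *memcg = mem_cgroup_from_css(css);

		/* ... event unregistration elided ... */

		/* An offlined memcg loses its protection entirely. */
		page_counter_set_min(&memcg->memory, 0);
		page_counter_set_low(&memcg->memory, 0);

		/* ... kmem/writeback offlining, stock draining elided ... */
	}

So if the offline groups you describe are cgroups that have actually been
removed, they carry no low protection at all.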
--
Michal Hocko
SUSE Labs
Thread overview: 16+ messages
2018-12-03 8:01 [PATCH 1/3] mm/memcg: Fix min/low usage in propagate_protected_usage() Xunlei Pang
2018-12-03 8:01 ` [PATCH 2/3] mm/vmscan: Enable kswapd to reclaim low-protected memory Xunlei Pang
2018-12-03 11:56 ` Michal Hocko
2018-12-03 15:20 ` Xunlei Pang
2018-12-03 17:22 ` Michal Hocko [this message]
2018-12-04 2:40 ` Xunlei Pang
2018-12-04 7:25 ` Michal Hocko
2018-12-04 8:44 ` Xunlei Pang
2018-12-03 8:01 ` [PATCH 3/3] mm/memcg: Avoid reclaiming below hard protection Xunlei Pang
2018-12-03 11:57 ` Michal Hocko
2018-12-04 2:53 ` Xunlei Pang
2018-12-03 11:54 ` [PATCH 1/3] mm/memcg: Fix min/low usage in propagate_protected_usage() Michal Hocko
2018-12-03 14:49 ` Xunlei Pang
2018-12-03 18:00 ` Roman Gushchin
2018-12-05 8:58 ` Xunlei Pang
2018-12-05 23:11 ` Roman Gushchin