linux-mm.kvack.org archive mirror
From: Roman Gushchin <guro@fb.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: linux-mm@kvack.org, Tejun Heo <tj@kernel.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	kernel-team@fb.com, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm, memcg: reset low limit during memcg offlining
Date: Tue, 25 Jul 2017 13:06:42 +0100	[thread overview]
Message-ID: <20170725120642.GA12635@castle.DHCP.thefacebook.com> (raw)
In-Reply-To: <20170725115808.GE26723@dhcp22.suse.cz>

On Tue, Jul 25, 2017 at 01:58:08PM +0200, Michal Hocko wrote:
> On Tue 25-07-17 12:40:47, Roman Gushchin wrote:
> > A removed memory cgroup with a defined low limit and some remaining
> > page cache has very low chances of being freed.
> > 
> > If a cgroup has been removed, there is likely no memory pressure
> > inside the cgroup, and its page cache is protected from external
> > pressure by the defined low limit. The cgroup will be freed only
> > after all of its pages are reclaimed, and that will not happen while
> > there is any other reclaimable memory in the system. This means there
> > is a good chance that cold page cache will reside in memory for an
> > indefinite amount of time, wasting system resources.
> > 
> > Fix this issue by zeroing memcg->low during memcg offlining.
> 
> Very well spotted! This goes all the way back to the introduction of
> the low limit, AFAICS. I would even be tempted to mark it for stable,
> because hiding some memory from reclaim basically indefinitely is not
> good. We might have just been lucky that nobody has noticed it yet.

I believe that's because there are not that many actual low limit users,
and those who do use it carry out-of-tree patches to mitigate this issue.

Thanks!

Roman
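
[Editor's note: for reference, the fix under discussion amounts to
clearing the low limit when the cgroup is taken offline. A minimal
sketch of the idea, assuming the mem_cgroup_css_offline() hook and the
memcg->low field as they existed in kernels of that era; the rest of
the offlining work is elided:

	static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
	{
		struct mem_cgroup *memcg = mem_cgroup_from_css(css);

		/*
		 * An offline memcg has no tasks left to generate
		 * internal pressure, so a leftover low limit only
		 * shields its cold page cache from external reclaim
		 * and keeps the dying cgroup pinned in memory. Drop
		 * the protection here.
		 */
		memcg->low = 0;

		/* ... existing offlining work continues here ... */
	}

With the limit cleared, global reclaim can evict the leftover page
cache, which in turn allows the offlined memcg itself to be freed.]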

Thread overview: 14+ messages
2017-07-25 11:40 [PATCH] mm, memcg: reset low limit during memcg offlining Roman Gushchin
2017-07-25 11:58 ` Michal Hocko
2017-07-25 12:06   ` Roman Gushchin [this message]
2017-07-25 12:05 ` Vladimir Davydov
2017-07-25 12:31   ` Roman Gushchin
2017-07-25 12:44     ` Michal Hocko
2017-07-26  8:30     ` Vladimir Davydov
2017-07-26 12:06       ` Tejun Heo
2017-07-27 13:04       ` [PATCH 1/2] mm, memcg: reset memory.low " Roman Gushchin
2017-07-27 13:04         ` [PATCH 2/2] cgroup: revert fa06235b8eb0 ("cgroup: reset css on destruction") Roman Gushchin
2017-07-27 13:52           ` Tejun Heo
2017-07-27 14:36           ` Johannes Weiner
2017-07-27 14:35         ` [PATCH 1/2] mm, memcg: reset memory.low during memcg offlining Johannes Weiner
2017-07-27 14:47         ` Michal Hocko

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20170725120642.GA12635@castle.DHCP.thefacebook.com \
    --to=guro@fb.com \
    --cc=cgroups@vger.kernel.org \
    --cc=hannes@cmpxchg.org \
    --cc=kernel-team@fb.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=mhocko@kernel.org \
    --cc=tj@kernel.org \
    --cc=vdavydov.dev@gmail.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

Be sure your reply has a Subject: header at the top and a blank line
before the message body.