From: Yafang Shao <laoar.shao@gmail.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	 Andrew Morton <akpm@linux-foundation.org>,
	Linux MM <linux-mm@kvack.org>
Subject: Re: [PATCH] mm, memcg: clear page protection when memcg oom group happens
Date: Mon, 25 Nov 2019 22:44:41 +0800
Message-ID: <CALOAHbD+TtFzGdWV5fTikUM9_6H7cRUZOgnxipyh5dzOh+d8VQ@mail.gmail.com>
In-Reply-To: <20191125142150.GP31714@dhcp22.suse.cz>

On Mon, Nov 25, 2019 at 10:21 PM Michal Hocko <mhocko@kernel.org> wrote:
>
> On Mon 25-11-19 22:11:15, Yafang Shao wrote:
> > On Mon, Nov 25, 2019 at 8:45 PM Michal Hocko <mhocko@kernel.org> wrote:
> > >
> > > On Mon 25-11-19 20:37:52, Yafang Shao wrote:
> > > > On Mon, Nov 25, 2019 at 8:31 PM Michal Hocko <mhocko@kernel.org> wrote:
> [...]
> > > > > Again, what is a problem that you are trying to fix?
> > > >
> > > > When there's no processes running in a memcg, for example if they are
> > > > killed by OOM killer, we can't reclaim the file page cache protected
> > > > by memory.min of this memcg. These file page caches are useless in
> > > > this case.
> > > > That's what I'm trying to fix.
> > >
> > > Could you be more specific please? I would assume that the group oom
> > > configured memcg would either restart its workload when killed (that is
> > > why you want to kill the whole workload to restart it cleanly in many
> > > case) or simply tear down the memcg altogether.
> > >
> >
> > Yes, we always restart it automatically if these processes exit
> > (no matter whether because of OOM or some other reason).
> > It is safe to do that if OOM happens, because the OOM is always caused
> > by leaked anon pages, and the restart frees those anon pages.
>
> No this is an incorrect assumption. The OOM might happen for many
> different reasons.
>
> > But there may be cases where we fail to restart it; if that happens,
> > the protected pages will never be reclaimed until the admin resets the
> > protection or takes this memcg offline.
>
> If the workload cannot be restarted for whatever reason then you need an
> admin intervention and a proper cleanup. That would include resetting
> reclaim protection when in use.
>
> > When there are no processes, we don't need to protect the pages. You
> > can consider it a form of 'fault tolerance'.
>
> I have already tried to explain why this is a bold statement that
> doesn't really hold universally and that the kernel doesn't really have
> enough information to make an educated guess.
>

I didn't mean we must reclaim the protected pages in all cases; I meant
that in some cases we should reclaim them.
If the kernel can't make an educated guess, we can tell it what to do,
for example by introducing a new controller file that tells the kernel
whether or not to reclaim the protected pages when there are no
processes running.
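
(To be clear, such a controller file doesn't exist today; it is only a
suggestion. For illustration, here is a minimal userspace sketch of the
equivalent policy an agent can already apply: once the group reports
"populated 0" in cgroup.events, drop its memory.min. The cgroup path and
the cgroup v2 mount point are assumptions for the example.)

/*
 * Not part of the patch under discussion -- an illustrative userspace
 * sketch only. It clears memory.min for one cgroup once the group has
 * no processes left, which is what an admin or management agent can
 * already do today. The cgroup path below is an example.
 */
#include <stdio.h>

static int cgroup_is_empty(const char *cg)
{
	char path[256], line[64];
	int populated = 1;
	FILE *f;

	/* cgroup.events contains a "populated 0|1" line */
	snprintf(path, sizeof(path), "%s/cgroup.events", cg);
	f = fopen(path, "r");
	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "populated %d", &populated) == 1)
			break;
	fclose(f);
	return populated == 0;
}

int main(void)
{
	const char *cg = "/sys/fs/cgroup/workload";	/* example path */
	char path[256];
	FILE *f;

	if (cgroup_is_empty(cg) != 1)
		return 0;

	/* no task left: drop the reclaim protection of this group */
	snprintf(path, sizeof(path), "%s/memory.min", cg);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return 1;
	}
	if (fputs("0", f) == EOF || fclose(f) == EOF) {
		perror(path);
		return 1;
	}
	return 0;
}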

> > > In other words why do you care about the oom killer case so much? It is
> > > not different that handling a lingering memcg with the workload already
> > > finished. You simply have no way to know whether the reclaim protection
> > > is still required. Admin is supposed to either offline the memcg that is
> > > no longer used or drop the reclaim protection once it is not needed
> > > because that has some visible consequences on the overall system
> > > operation.
> >
> > Actually, what I'm concerned about is the case where there's no
> > process running but memory protection continues protecting the file
> > pages. OOM is just one such case.
>
> This sounds like a misconfiguration which should be handled by an admin.

That may be a misconfiguration, but the kernel could do something
before the admin notices it.
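
(Until the kernel handles it, a periodic scan from userspace can at
least surface such groups. A rough sketch follows; the mount point and
the restriction to top-level groups are assumptions for illustration.)

/*
 * Illustrative sketch only: flag top-level cgroups that still carry
 * memory.min protection although nothing runs in them any more, so the
 * situation is visible before the protected page cache lingers
 * unnoticed. Assumes a cgroup v2 mount at /sys/fs/cgroup; a real tool
 * would walk the whole hierarchy.
 */
#include <dirent.h>
#include <stdio.h>

static long long read_ll(const char *cg, const char *file, const char *fmt)
{
	char path[512], line[128];
	long long val = -1;
	FILE *f;

	snprintf(path, sizeof(path), "%s/%s", cg, file);
	f = fopen(path, "r");
	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, fmt, &val) == 1)
			break;
	fclose(f);
	return val;
}

int main(void)
{
	const char *root = "/sys/fs/cgroup";	/* assumed cgroup2 mount */
	struct dirent *de;
	char cg[512];
	DIR *d = opendir(root);

	if (!d)
		return 1;
	while ((de = readdir(d))) {
		if (de->d_name[0] == '.')
			continue;
		snprintf(cg, sizeof(cg), "%s/%s", root, de->d_name);
		/* protected, but no process left in the group */
		if (read_ll(cg, "memory.min", "%lld") > 0 &&
		    read_ll(cg, "cgroup.events", "populated %lld") == 0)
			printf("%s: memory.min set but empty, memory.current=%lld\n",
			       cg, read_ll(cg, "memory.current", "%lld"));
	}
	closedir(d);
	return 0;
}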

Thanks

Yafang


Thread overview: 20+ messages
2019-11-25 10:14 [PATCH] mm, memcg: clear page protection when memcg oom group happens Yafang Shao
2019-11-25 11:08 ` Michal Hocko
2019-11-25 11:37   ` Yafang Shao
2019-11-25 11:54     ` Michal Hocko
2019-11-25 12:17       ` Yafang Shao
2019-11-25 12:31         ` Michal Hocko
2019-11-25 12:37           ` Yafang Shao
2019-11-25 12:45             ` Michal Hocko
2019-11-25 14:11               ` Yafang Shao
2019-11-25 14:21                 ` Michal Hocko
2019-11-25 14:42                   ` Johannes Weiner
2019-11-25 14:45                     ` Yafang Shao
2019-11-26  3:52                     ` Yafang Shao
2019-11-26  7:31                       ` Michal Hocko
2019-11-26  9:35                         ` Yafang Shao
2019-11-26  9:50                           ` Michal Hocko
2019-11-26 10:02                             ` Yafang Shao
2019-11-26 10:22                               ` Michal Hocko
2019-11-26 10:56                                 ` Yafang Shao
2019-11-25 14:44                   ` Yafang Shao [this message]
