From: Daniel Colascione <dancol@google.com>
To: dave.hansen@intel.com
Cc: linux-mm@kvack.org, Tim Murray <timmurray@google.com>,
	Minchan Kim <minchan@kernel.org>
Subject: Re: Why do we let munmap fail?
Date: Mon, 21 May 2018 15:35:25 -0700
Message-ID: <CAKOZuev=Pa6FkvxTPbeA1CcYG+oF2JM+JVL5ELHLZ--7wyr++g@mail.gmail.com>
In-Reply-To: <e6bdfa05-fa80-41d1-7b1d-51cf7e4ac9a1@intel.com>

On Mon, May 21, 2018 at 3:29 PM Dave Hansen <dave.hansen@intel.com> wrote:

> On 05/21/2018 03:20 PM, Daniel Colascione wrote:
> >> VMAs consume kernel memory and we can't reclaim them.  That's what it
> >> boils down to.
> > How is it different from memfd in that respect?

> I don't really know what you mean.

I should have been clearer. I meant that, in general, processes can
*already* ask the kernel to allocate memory on their behalf, and sometimes
that memory can't be reclaimed without an OOM kill. (You can swap
memfd/tmpfs contents, but for simplicity, imagine we're running without a
pagefile.)
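
To make that concrete, here's a minimal sketch (untested; the name and the
1 GiB size are just illustrative) of a process pinning memory through
memfd. With no pagefile, nothing short of an OOM kill gets these pages
back:

	#define _GNU_SOURCE
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		size_t sz = 1UL << 30;			/* 1 GiB, illustrative */
		int fd = memfd_create("pinned", 0);	/* anonymous tmpfs file */

		if (fd < 0 || ftruncate(fd, sz))
			return 1;
		char *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, 0);
		if (p == MAP_FAILED)
			return 1;
		memset(p, 1, sz);	/* fault in every page; unswappable without a pagefile */
		pause();		/* hold the pages until we're killed */
		return 0;
	}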

> I know folks use memfd to figure out
> how much memory pressure we are under.  I guess that would trigger when
> you consume lots of memory with VMAs.

I think you're thinking of the VM pressure level special files, not memfd,
which creates an anonymous tmpfs file.
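
(For reference, the interface I mean is the cgroup-v1 memory.pressure_level
file. A rough sketch of how folks register for notifications, assuming a v1
memory controller mounted at /sys/fs/cgroup/memory:)

	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/eventfd.h>
	#include <unistd.h>

	int main(void)
	{
		int efd = eventfd(0, 0);
		int pfd = open("/sys/fs/cgroup/memory/memory.pressure_level",
			       O_RDONLY);
		int cfd = open("/sys/fs/cgroup/memory/cgroup.event_control",
			       O_WRONLY);
		char buf[64];
		uint64_t count;

		if (efd < 0 || pfd < 0 || cfd < 0)
			return 1;
		/* "<eventfd> <pressure fd> <level>" arms the notification */
		snprintf(buf, sizeof(buf), "%d %d low", efd, pfd);
		if (write(cfd, buf, strlen(buf)) < 0)
			return 1;
		read(efd, &count, sizeof(count));	/* blocks until pressure is signaled */
		printf("memory pressure: low\n");
		return 0;
	}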

> VMAs are probably the most similar to things like page tables that are
> kernel memory that can't be directly reclaimed, but do get freed at
> OOM-kill-time.  But, VMAs are a bit harder than page tables because
> freeing a page worth of VMAs does not necessarily free an entire page.

I don't understand. We can reclaim the memory used by VMAs by killing the
process or processes attached to the address space that owns them. The OOM
killer should Just Work. Why do we need a special limit on the VMA count?
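
For concreteness, here's roughly the failure mode that started this thread
(a sketch; the mapping size is illustrative): unmapping the middle of a
mapping has to split one VMA into two, and once the process is at
vm.max_map_count that split, and hence munmap() itself, fails with ENOMEM:

	#include <errno.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long page = sysconf(_SC_PAGESIZE);
		size_t len = 1024 * page;
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (p == MAP_FAILED)
			return 1;
		/* Punching a hole in the middle needs an extra VMA; if the
		 * process is already at vm.max_map_count, the kernel refuses,
		 * and the caller has no way to give the memory back. */
		if (munmap(p + len / 2, page))
			fprintf(stderr, "munmap: %s\n", strerror(errno));
		return 0;
	}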
