From: Dave Hansen <dave.hansen@intel.com>
To: Daniel Colascione <dancol@google.com>
Cc: linux-mm@kvack.org, Tim Murray <timmurray@google.com>,
Minchan Kim <minchan@kernel.org>
Subject: Re: Why do we let munmap fail?
Date: Mon, 21 May 2018 15:48:05 -0700 [thread overview]
Message-ID: <20eeca79-0813-a921-8b86-4c2a0c98a1a1@intel.com> (raw)
In-Reply-To: <CAKOZuev=Pa6FkvxTPbeA1CcYG+oF2JM+JVL5ELHLZ--7wyr++g@mail.gmail.com>
On 05/21/2018 03:35 PM, Daniel Colascione wrote:
>> I know folks use memfd to figure out
>> how much memory pressure we are under. I guess that would trigger when
>> you consume lots of memory with VMAs.
>
> I think you're thinking of the VM pressure level special files, not memfd,
> which creates an anonymous tmpfs file.
Yep, you're right.
>> VMAs are probably the most similar to things like page tables that are
>> kernel memory that can't be directly reclaimed, but do get freed at
>> OOM-kill-time. But, VMAs are a bit harder than page tables because
>> freeing a page worth of VMAs does not necessarily free an entire page.
>
> I don't understand. We can reclaim memory used by VMAs by killing the
> process or processes attached to the address space that owns those VMAs.
> The OOM killer should Just Work. Why do we have to have some special limit
> of VMA count?
The OOM killer doesn't take the VMA count into consideration as far as I
remember. I can't think of any reason why not except for the internal
fragmentation problem.
The current VMA limit is ~12MB of VMAs per process, which is quite a
bit. I think it would be reasonable to start considering that in OOM
decisions, although it's surely inconsequential except on very small
systems.
There are also certainly denial-of-service concerns if you allow
arbitrary numbers of VMAs. The rbtree lookup, for instance, is
O(log(n)), but I'd be willing to bet there are plenty of things that
fall over if you let the ~65k limit grow 10x or 100x larger.
Thread overview: 19+ messages
2018-05-21 22:07 Why do we let munmap fail? Daniel Colascione
2018-05-21 22:12 ` Dave Hansen
2018-05-21 22:20 ` Daniel Colascione
2018-05-21 22:29 ` Dave Hansen
2018-05-21 22:35 ` Daniel Colascione
2018-05-21 22:48 ` Dave Hansen [this message]
2018-05-21 22:54 ` Daniel Colascione
2018-05-21 23:02 ` Dave Hansen
2018-05-21 23:16 ` Daniel Colascione
2018-05-21 23:32 ` Dave Hansen
2018-05-22 0:00 ` Daniel Colascione
2018-05-22 0:22 ` Matthew Wilcox
2018-05-22 0:38 ` Daniel Colascione
2018-05-22 1:19 ` Theodore Y. Ts'o
2018-05-22 1:41 ` Daniel Colascione
2018-05-22 2:09 ` Daniel Colascione
2018-05-22 2:11 ` Matthew Wilcox
2018-05-22 1:22 ` Matthew Wilcox
2018-05-22 5:34 ` Nicholas Piggin