From: Michal Hocko <mhocko@suse.com>
To: Mike Rapoport <rppt@kernel.org>
Cc: David Hildenbrand <david@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Alexander Viro <viro@zeniv.linux.org.uk>,
Andy Lutomirski <luto@kernel.org>, Arnd Bergmann <arnd@arndb.de>,
Borislav Petkov <bp@alien8.de>,
Catalin Marinas <catalin.marinas@arm.com>,
Christopher Lameter <cl@linux.com>,
Dan Williams <dan.j.williams@intel.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
Elena Reshetova <elena.reshetova@intel.com>,
"H. Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>,
James Bottomley <jejb@linux.ibm.com>,
"Kirill A. Shutemov" <kirill@shutemov.name>,
Matthew Wilcox <willy@infradead.org>,
Mark Rutland <mark.rutland@arm.com>,
Mike Rapoport <rppt@linux.ibm.com>,
Michael Kerrisk <mtk.manpages@gmail.com>,
Palmer Dabbelt <palmer@dabbelt.com>,
Paul Walmsley <paul.walmsley@sifive.com>,
Peter Zijlstra <peterz@infradead.org>,
Rick Edgecombe <rick.p.edgecombe@intel.com>,
Roman Gushchin <guro@fb.com>, Shakeel Butt <shakeelb@google.com>,
Shuah Khan <shuah@kernel.org>,
Thomas Gleixner <tglx@linutronix.de>,
Tycho Andersen <tycho@tycho.ws>, Will Deacon <will@kernel.org>,
linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
linux-nvdimm@lists.01.org, linux-riscv@lists.infradead.org,
x86@kernel.org, Hagen Paul Pfeifer <hagen@jauu.net>,
Palmer Dabbelt <palmerdabbelt@google.com>
Subject: Re: [PATCH v16 07/11] secretmem: use PMD-size pages to amortize direct map fragmentation
Date: Thu, 28 Jan 2021 14:01:06 +0100
Message-ID: <YBK1kqL7JA7NePBQ@dhcp22.suse.cz>
In-Reply-To: <20210128092259.GB242749@kernel.org>
On Thu 28-01-21 11:22:59, Mike Rapoport wrote:
> On Tue, Jan 26, 2021 at 01:08:23PM +0100, Michal Hocko wrote:
> > On Tue 26-01-21 12:56:48, David Hildenbrand wrote:
> > > On 26.01.21 12:46, Michal Hocko wrote:
> > > > On Thu 21-01-21 14:27:19, Mike Rapoport wrote:
> > > > > From: Mike Rapoport <rppt@linux.ibm.com>
> > > > >
> > > > > Removing a PAGE_SIZE page from the direct map every time such a page is
> > > > > allocated for a secret memory mapping will cause severe fragmentation of
> > > > > the direct map. This fragmentation can be reduced by using PMD-size pages
> > > > > as a pool of small pages for secret memory mappings.
> > > > >
> > > > > Add a gen_pool per secretmem inode and lazily populate this pool with
> > > > > PMD-size pages.
> > > > >
> > > > > As pages allocated by secretmem become unmovable, use CMA to back large
> > > > > page caches so that the page allocator won't be surprised by a failing
> > > > > attempt to migrate these pages.
> > > > >
> > > > > The CMA area used by secretmem is controlled by the "secretmem=" kernel
> > > > > parameter. This allows explicit control over the memory available for
> > > > > secretmem and provides a hard upper limit for secretmem consumption.
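
For reference, a rough sketch of the scheme the changelog describes, with
made-up names (secretmem_ctx, secretmem_cma) and none of the locking or
error handling of the real patch:

	#include <linux/cma.h>
	#include <linux/genalloc.h>
	#include <linux/mm.h>
	#include <linux/set_memory.h>
	#include <asm/tlbflush.h>

	/* both of these names are assumptions made for the sketch */
	struct secretmem_ctx {
		struct gen_pool *pool;	/* per-inode pool of removed pages */
	};
	static struct cma *secretmem_cma;	/* set up from "secretmem=" */

	static int secretmem_pool_refill(struct secretmem_ctx *ctx)
	{
		unsigned long nr_pages = 1UL << (PMD_SHIFT - PAGE_SHIFT);
		unsigned long addr, i;
		struct page *page;
		int err;

		/* one PMD-size, PMD-aligned chunk from the CMA area */
		page = cma_alloc(secretmem_cma, nr_pages,
				 PMD_SHIFT - PAGE_SHIFT, false);
		if (!page)
			return -ENOMEM;

		/* take the whole chunk out of the kernel direct map */
		for (i = 0; i < nr_pages; i++) {
			err = set_direct_map_invalid_noflush(page + i);
			if (err)
				goto release;
		}
		addr = (unsigned long)page_address(page);
		flush_tlb_kernel_range(addr, addr + PMD_SIZE);

		err = gen_pool_add(ctx->pool, addr, PMD_SIZE, NUMA_NO_NODE);
		if (err)
			goto release;
		return 0;

	release:
		/* restoring the direct map is omitted from the sketch */
		cma_release(secretmem_cma, page, nr_pages);
		return err;
	}

Faults would then carve PAGE_SIZE pieces out of that chunk with
gen_pool_alloc(ctx->pool, PAGE_SIZE) until the pool runs dry and needs
another refill.
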
> > > >
> > > > OK, so I have finally had a closer look at this and it is really not
> > > > acceptable. I have already mentioned this in a response to another patch,
> > > > but any task is able to deprive other tasks of access to secret memory and
> > > > trigger the OOM killer, which wouldn't ever really recover and could
> > > > potentially panic the system. Now you could be less drastic and only raise
> > > > SIGBUS on the fault, but that would still be quite terrible. There is a
> > > > very good reason why hugetlb implements its non-trivial reservation
> > > > system: to avoid exactly these problems.
>
> So, if I understand your concerns correctly, this implementation has two
> issues:
> 1) allocation failure at page fault that causes an unrecoverable OOM and
> 2) a possibility for an unprivileged user to deplete the secretmem pool and
> cause (1) to others
>
> I'm not really familiar with OOM internals, but when I simulated an
> allocation failure in my testing, only the allocating process and its
> parent were OOM-killed and then the system continued normally.
If you kill the allocating process then yes, it would work, but your
process might be the very last to be selected.
> You are right, it would be better if we sent SIGBUS instead of going OOM,
> but I don't agree that SIGBUS is terrible. Since we started to draw
> parallels with hugetlbfs: even despite its complex reservation system,
> hugetlb_fault() may fail to allocate pages from CMA and this will still
> cause SIGBUS.
This is an unexpected runtime error, unless you make it an integral part
of the API design.
> And hugetlb pools may also be depleted by anybody calling
> mmap(MAP_HUGETLB), and there is no limiting knob for this, while secretmem
> has RLIMIT_MEMLOCK.
Yes, it can fail, but it would fail at mmap time, when the reservation
fails, not at #PF time, which can happen at any point.
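
To make the userspace-visible difference concrete, a sketch (the
__NR_memfd_secret number below is an assumption; whatever the series'
patched uapi headers define is what actually applies):

	#include <string.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#ifndef __NR_memfd_secret
	#define __NR_memfd_secret 447	/* assumed; take it from the series' headers */
	#endif

	static int touch_secret(size_t len)
	{
		/*
		 * With hugetlbfs, an exhausted pool is reported right away:
		 * mmap(MAP_ANONYMOUS | MAP_HUGETLB) fails with ENOMEM when
		 * the reservation cannot be made.
		 */
		int fd = syscall(__NR_memfd_secret, 0);
		if (fd < 0)
			return -1;
		if (ftruncate(fd, len) < 0) {
			close(fd);
			return -1;
		}

		/* secretmem: this succeeds even if the pool is depleted... */
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, 0);
		if (p == MAP_FAILED) {
			close(fd);
			return -1;
		}

		/*
		 * ...and the failure only becomes visible here, at fault time:
		 * OOM with the patch as posted, SIGBUS with the proposed change.
		 */
		memset(p, 0, len);
		return 0;
	}
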
> That said, simply replacing VM_FAULT_OOM with VM_FAULT_SIGBUS makes
> secretmem at least as controllable and robust as hugetlbfs, even without a
> complex reservation scheme at mmap() time.
Still sucks huge!
> > > > So unless I am really misreading the code
> > > > Nacked-by: Michal Hocko <mhocko@suse.com>
> > > >
> > > > That doesn't mean I reject the whole idea. There are some details to
> > > > sort out, as mentioned elsewhere, but you cannot really depend on a
> > > > pre-allocated pool which can fail at fault time like that.
> > >
> > > So, to do it similarly to hugetlbfs (e.g., with CMA), there would have to be
> > > a mechanism to actually try pre-reserving (e.g., from the CMA area), at which
> > > point the pages would get moved to the secretmem pool, and a mechanism for
> > > mmap() etc. to "reserve" from this secretmem pool, such that there are
> > > guarantees at fault time?
> >
> > yes, reserve at mmap time and use during the fault. But this all sounds
> > like a self-inflicted problem to me. Sure you can have a pre-allocated
> > or more dynamic pool to reduce the direct mapping fragmentation but you
> > can always fall back to regular allocations. In other words, have the pool
> > as an optimization rather than a hard requirement. With careful access
> > control this sounds like a manageable solution to me.
>
> I really wish we had had this discussion for earlier spins of this series,
> but since that didn't happen, let's refresh the history a bit.
I am sorry but I am really fighting to find time to keep up with all the
moving targets...
> One of the major pushbacks on the first RFC [1] of the concept was about
> the direct map fragmentation. I tried really hard to find data that shows
> what the performance difference is between different page sizes in the
> direct map, and I didn't find anything.
>
> So, presuming that large pages do provide an advantage, the first
> implementation of secretmem used PMD_ORDER allocations to amortise the
> effect of the direct map fragmentation and then handed out 4k pages at
> each fault. In addition there was an option to reserve a finite pool at
> boot time and limit secretmem allocations only to that pool.
>
> At some point David suggested using CMA to improve overall flexibility
> [3], so I switched secretmem to use CMA.
>
> Now, with the data we have at hand (my benchmarks and the Intel report
> David mentioned) I'm not even sure this whole pooling is required at all.
I would still like to understand whether that data is actually
representative, with some underlying reasoning rather than just "I have run
these XYZ benchmarks and the numbers do not look terrible".
> I like the idea of having a pool as an optimization rather than a hard
> requirement, but I don't see why it would need careful access control. As
> the direct map fragmentation does not necessarily degrade performance (and
> sometimes it actually improves it), and even then the degradation is
> small, trying a PMD_ORDER allocation for the pool and then falling back to
> 4K pages may be just fine.
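
Sketched out, with made-up helper names and no real error handling, that
fallback path at fault time would look roughly like:

	/* try the pooled PMD-size chunk first, fall back to a plain 4K page */
	static struct page *secretmem_alloc_page(struct secretmem_ctx *ctx,
						 gfp_t gfp)
	{
		unsigned long addr;
		struct page *page;

		/* fast path: a 4K piece of an already reserved PMD-size chunk */
		addr = gen_pool_alloc(ctx->pool, PAGE_SIZE);
		if (!addr && !secretmem_pool_refill(ctx))
			addr = gen_pool_alloc(ctx->pool, PAGE_SIZE);
		if (addr)
			return virt_to_page((void *)addr);

		/*
		 * fallback: a regular order-0 allocation, at the cost of
		 * splitting one more PMD in the direct map
		 */
		page = alloc_page(gfp);
		if (!page)
			return NULL;
		if (set_direct_map_invalid_noflush(page)) {
			__free_page(page);
			return NULL;
		}
		addr = (unsigned long)page_address(page);
		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
		return page;
	}
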
Well, as soon as this is a scarce resource, access control seems like the
first thing to think of. Maybe it is not really necessary, but then that
should be properly justified.
I am also still not sure why this whole thing is not just a ramdisk/ramfs
which happens to unmap its pages from the direct map. Wouldn't that be a
much easier model to work with? You would get access control for free as
well.
--
Michal Hocko
SUSE Labs