From: Anthony Yznaga <anthony.yznaga@oracle.com>
To: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org, linux-arch@vger.kernel.org
Cc: mhocko@kernel.org, tglx@linutronix.de, mingo@redhat.com,
bp@alien8.de, x86@kernel.org, hpa@zytor.com,
viro@zeniv.linux.org.uk, akpm@linux-foundation.org,
arnd@arndb.de, ebiederm@xmission.com, keescook@chromium.org,
gerg@linux-m68k.org, ktkhai@virtuozzo.com,
christian.brauner@ubuntu.com, peterz@infradead.org,
esyr@redhat.com, jgg@ziepe.ca, christian@kellner.me,
areber@redhat.com, cyphar@cyphar.com, steven.sistare@oracle.com
Subject: [RFC PATCH 1/5] elf: reintroduce using MAP_FIXED_NOREPLACE for elf executable mappings
Date: Mon, 27 Jul 2020 10:11:23 -0700 [thread overview]
Message-ID: <1595869887-23307-2-git-send-email-anthony.yznaga@oracle.com> (raw)
In-Reply-To: <1595869887-23307-1-git-send-email-anthony.yznaga@oracle.com>

Commit b212921b13bd ("elf: don't use MAP_FIXED_NOREPLACE for elf
executable mappings") reverted to using MAP_FIXED to map ELF load
segments because the load segments in some binaries overlap, which can
cause MAP_FIXED_NOREPLACE to fail. The original intent of
MAP_FIXED_NOREPLACE was to prevent the ELF image from silently
clobbering an existing mapping (e.g. the stack). To achieve this,
expand on the logic used when loading ET_DYN binaries, which calculates
a total size for the image when the first segment is mapped, maps the
entire image, and then unmaps the remainder before the remaining
segments are mapped. Apply this logic to ET_EXEC binaries as well as
ET_DYN binaries, and for both ET_EXEC and ET_DYN+INTERP use
MAP_FIXED_NOREPLACE for the initial total-size mapping and MAP_FIXED
for the remaining mappings. For ET_DYN without INTERP, continue to map
at a system-selected address in the mmap region.

Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
---
fs/binfmt_elf.c | 112 ++++++++++++++++++++++++++++++++------------------------
1 file changed, 64 insertions(+), 48 deletions(-)

diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index 9fe3b51c116a..6445a6dbdb1d 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -1046,58 +1046,25 @@ static int load_elf_binary(struct linux_binprm *bprm)
vaddr = elf_ppnt->p_vaddr;
/*
- * If we are loading ET_EXEC or we have already performed
- * the ET_DYN load_addr calculations, proceed normally.
+ * Map remaining segments with MAP_FIXED once the first
+ * total size mapping has been done.
*/
- if (elf_ex->e_type == ET_EXEC || load_addr_set) {
+ if (load_addr_set) {
elf_flags |= MAP_FIXED;
- } else if (elf_ex->e_type == ET_DYN) {
- /*
- * This logic is run once for the first LOAD Program
- * Header for ET_DYN binaries to calculate the
- * randomization (load_bias) for all the LOAD
- * Program Headers, and to calculate the entire
- * size of the ELF mapping (total_size). (Note that
- * load_addr_set is set to true later once the
- * initial mapping is performed.)
- *
- * There are effectively two types of ET_DYN
- * binaries: programs (i.e. PIE: ET_DYN with INTERP)
- * and loaders (ET_DYN without INTERP, since they
- * _are_ the ELF interpreter). The loaders must
- * be loaded away from programs since the program
- * may otherwise collide with the loader (especially
- * for ET_EXEC which does not have a randomized
- * position). For example to handle invocations of
- * "./ld.so someprog" to test out a new version of
- * the loader, the subsequent program that the
- * loader loads must avoid the loader itself, so
- * they cannot share the same load range. Sufficient
- * room for the brk must be allocated with the
- * loader as well, since brk must be available with
- * the loader.
- *
- * Therefore, programs are loaded offset from
- * ELF_ET_DYN_BASE and loaders are loaded into the
- * independently randomized mmap region (0 load_bias
- * without MAP_FIXED).
- */
- if (interpreter) {
- load_bias = ELF_ET_DYN_BASE;
- if (current->flags & PF_RANDOMIZE)
- load_bias += arch_mmap_rnd();
- elf_flags |= MAP_FIXED;
- } else
- load_bias = 0;
-
+ } else {
/*
- * Since load_bias is used for all subsequent loading
- * calculations, we must lower it by the first vaddr
- * so that the remaining calculations based on the
- * ELF vaddrs will be correctly offset. The result
- * is then page aligned.
+ * To ensure loading does not continue if an ELF
+ * LOAD segment overlaps an existing mapping (e.g.
+ * the stack), for the first LOAD Program Header
+			 * calculate the entire size of the ELF mapping
+ * and map it with MAP_FIXED_NOREPLACE. On success,
+ * the remainder will be unmapped and subsequent
+ * LOAD segments mapped with MAP_FIXED rather than
+ * MAP_FIXED_NOREPLACE because some binaries may
+ * have overlapping segments that would cause the
+ * mmap to fail.
*/
- load_bias = ELF_PAGESTART(load_bias - vaddr);
+ elf_flags |= MAP_FIXED_NOREPLACE;
total_size = total_mapping_size(elf_phdata,
elf_ex->e_phnum);
@@ -1105,6 +1072,55 @@ static int load_elf_binary(struct linux_binprm *bprm)
retval = -EINVAL;
goto out_free_dentry;
}
+
+ if (elf_ex->e_type == ET_DYN) {
+ /*
+ * This logic is run once for the first LOAD
+ * Program Header for ET_DYN binaries to
+ * calculate the randomization (load_bias) for
+ * all the LOAD Program Headers.
+ *
+ * There are effectively two types of ET_DYN
+ * binaries: programs (i.e. PIE: ET_DYN with
+ * INTERP) and loaders (ET_DYN without INTERP,
+ * since they _are_ the ELF interpreter). The
+ * loaders must be loaded away from programs
+ * since the program may otherwise collide with
+ * the loader (especially for ET_EXEC which does
+ * not have a randomized position). For example
+ * to handle invocations of "./ld.so someprog"
+ * to test out a new version of the loader, the
+ * subsequent program that the loader loads must
+ * avoid the loader itself, so they cannot share
+ * the same load range. Sufficient room for the
+ * brk must be allocated with the loader as
+ * well, since brk must be available with the
+ * loader.
+ *
+ * Therefore, programs are loaded offset from
+ * ELF_ET_DYN_BASE and loaders are loaded into
+ * the independently randomized mmap region
+ * (0 load_bias without MAP_FIXED*).
+ */
+ if (interpreter) {
+ load_bias = ELF_ET_DYN_BASE;
+ if (current->flags & PF_RANDOMIZE)
+ load_bias += arch_mmap_rnd();
+ } else {
+ load_bias = 0;
+ elf_flags &= ~MAP_FIXED_NOREPLACE;
+ }
+
+ /*
+ * Since load_bias is used for all subsequent
+ * loading calculations, we must lower it by
+ * the first vaddr so that the remaining
+ * calculations based on the ELF vaddrs will
+ * be correctly offset. The result is then
+ * page aligned.
+ */
+ load_bias = ELF_PAGESTART(load_bias - vaddr);
+ }
}
error = elf_map(bprm->file, load_bias + vaddr, elf_ppnt,
--
1.8.3.1