From: Matthew Wilcox <willy@infradead.org>
To: David Hildenbrand <david@redhat.com>
Cc: Khalid Aziz <khalid.aziz@oracle.com>,
	"Longpeng (Mike, Cloud Infrastructure Service Product Dept.)" <longpeng2@huawei.com>,
Steven Sistare <steven.sistare@oracle.com>,
Anthony Yznaga <anthony.yznaga@oracle.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"Gonglei (Arei)" <arei.gonglei@huawei.com>
Subject: Re: [RFC PATCH 0/5] madvise MADV_DOEXEC
Date: Mon, 16 Aug 2021 14:32:16 +0100 [thread overview]
Message-ID: <YRpo4EAJSkY7hI7Q@casper.infradead.org> (raw)
In-Reply-To: <88884f55-4991-11a9-d330-5d1ed9d5e688@redhat.com>
On Mon, Aug 16, 2021 at 03:24:38PM +0200, David Hildenbrand wrote:
> On 16.08.21 14:46, Matthew Wilcox wrote:
> > On Mon, Aug 16, 2021 at 02:20:43PM +0200, David Hildenbrand wrote:
> > > On 16.08.21 14:07, Matthew Wilcox wrote:
> > > > On Mon, Aug 16, 2021 at 10:02:22AM +0200, David Hildenbrand wrote:
> > > > > > Mappings within this address range behave as if they were shared
> > > > > > between threads, so a write to a MAP_PRIVATE mapping will create a
> > > > > > page which is shared between all the sharers. The first process that
> > > > > > declares an address range mshare'd can continue to map objects in the
> > > > > > shared area. All other processes that want mshare'd access to this
> > > > > > memory area can do so by calling mshare(). After this call, the
> > > > > > address range given by mshare becomes a shared range in its address
> > > > > > space. Anonymous mappings will be shared and not COWed.
> > > > >
> > > > > Did I understand correctly that you want to share actual page tables between
> > > > > processes and consequently different MMs? That sounds like a very bad idea.
> > > >
> > > > That is the entire point. Consider a machine with 10,000 instances
> > > > of an application running (process model, not thread model). If each
> > > > application wants to map 1TB of RAM using 2MB pages, that's 4MB of page
> > > > tables per process or 40GB of RAM for the whole machine.
> > >
> > > What speaks against 1 GB pages then?
> >
> > Until recently, CPUs only had four 1GB TLB entries. I'm sure we
> > still have customers using that generation of CPUs. 2MB pages perform
> > better than 1GB pages on the previous generation of hardware, and I
> > haven't seen numbers for the next generation yet.
>
> I read that somewhere else before, yet we have heavy 1 GiB page users,
> especially in the context of VMs and DPDK.
I wonder if those users actually benchmarked. Or whether the memory
savings worked out so well for them that the loss of TLB performance
didn't matter.
> So, it only works for hugetlbfs in case uffd is not in place (-> no
> per-process data in the page table) and we have an actual shared mapping.
> When unsharing, we zap the PUD entry, which will result in allocating a
> per-process page table on next fault.
I think uffd was a huge mistake. It should have been a filesystem
instead of a hack on the side of anonymous memory.
> I will rephrase my previous statement "hugetlbfs just doesn't raise these
> problems because we are special casing it all over the place already". For
> example, not allowing to swap such pages. Disallowing MADV_DONTNEED. Special
> hugetlbfs locking.
Sure, that's why I want to drag this feature out of "oh this is a
hugetlb special case" and into "this is something Linux supports".