Date: Mon, 16 Aug 2021 17:15:35 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Khalid Aziz
Cc: David Hildenbrand,
    "Longpeng (Mike, Cloud Infrastructure Service Product Dept.)",
    Steven Sistare, Anthony Yznaga,
    "linux-kernel@vger.kernel.org", "linux-mm@kvack.org",
    "Gonglei (Arei)"
Subject: Re: [RFC PATCH 0/5] madvise MADV_DOEXEC
References: <88884f55-4991-11a9-d330-5d1ed9d5e688@redhat.com>
 <40bad572-501d-e4cf-80e3-9a8daa98dc7e@redhat.com>
 <3ce1f52f-d84d-49ba-c027-058266e16d81@redhat.com>
 <25d15c74-40e2-8ec3-5232-ab945f653580@oracle.com>
In-Reply-To: <25d15c74-40e2-8ec3-5232-ab945f653580@oracle.com>

On Mon, Aug 16, 2021 at 10:06:47AM -0600, Khalid Aziz wrote:
> On 8/16/21 9:59 AM, Matthew Wilcox wrote:
> > On Mon, Aug 16, 2021 at 05:01:44PM +0200, David Hildenbrand wrote:
> > > On 16.08.21 16:40, Matthew Wilcox wrote:
> > > > On Mon, Aug 16, 2021 at 04:33:09PM +0200, David Hildenbrand wrote:
> > > > > > > I did not follow why we have to play games with MAP_PRIVATE, and having
> > > > > > > private anonymous pages shared between processes that don't COW, introducing
> > > > > > > new syscalls etc.
> > > > > >
> > > > > > It's not about SHMEM, it's about file-backed pages on regular
> > > > > > filesystems.  I don't want to have XFS, ext4 and btrfs all with their
> > > > > > own implementations of ARCH_WANT_HUGE_PMD_SHARE.
> > > > >
> > > > > Let me ask this way: why do we have to play such games with MAP_PRIVATE?
> > > >
> > > > : Mappings within this address range behave as if they were shared
> > > > : between threads, so a write to a MAP_PRIVATE mapping will create a
> > > > : page which is shared between all the sharers.
> > > >
> > > > If so, that's a misunderstanding, because there are no games being played.
> > > > What Khalid's saying there is that because the page tables are already
> > > > shared for that range of address space, the COW of a MAP_PRIVATE will
> > > > create a new page, but that page will be shared between all the sharers.
> > > > The second write to a MAP_PRIVATE page (by any of the sharers) will not
> > > > create a COW situation.  Just like if all the sharers were threads of
> > > > the same process.
> > >
> > > It actually seems to be just like I understood it. We'll have multiple
> > > processes share anonymous pages writably, even though they are not using
> > > shared memory.
> > >
> > > IMHO, sharing page tables to optimize for something kernel-internal (page
> > > table consumption) should be completely transparent to user space, just
> > > like ARCH_WANT_HUGE_PMD_SHARE currently is, unless I am missing something
> > > important.
> > >
> > > The VM_MAYSHARE check in want_pmd_share()->vma_shareable() makes me assume
> > > that we really only optimize for MAP_SHARED right now, never for
> > > MAP_PRIVATE.
> >
> > It's definitely *not* about being transparent to userspace.  It's about
> > giving userspace new functionality where multiple processes can choose
> > to share a portion of their address space with each other.  What any
> > process changes in that range, every sharing process sees.
> > mmap(), munmap(), mprotect(), mremap(), everything.
>
> Exactly, and to further elaborate: once a process calls mshare() to declare
> its intent to share PTEs for a range of addresses and another process accepts
> that sharing by calling mshare() itself, the two (or more) processes have
> agreed to share PTEs for that entire address range. A MAP_PRIVATE mapping in
> this address range goes against the original intent of sharing, and what we
> are saying is that the original intent of sharing takes precedence in case of
> this conflict.

I don't know that it's against the original intent ...
I think MAP_PRIVATE in this context means "Private to this process and every process sharing this chunk of address space". So a store doesn't go through to the page cache, as it would with MAP_SHARED, but it is visible to the other processes sharing these page tables.
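
To make that concrete, here is a rough sketch of the semantics being
described.  mshare() is only the interface proposed in this RFC
discussion, so its signature, the fixed base address and the region
size below are illustrative assumptions, not an existing API:

/*
 * Sketch only: mshare() is the proposed, hypothetical syscall under
 * discussion, not something you can call today.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

extern int mshare(void *addr, size_t len);	/* hypothetical */

#define SHARED_BASE	((void *)0x7f5500000000UL)
#define SHARED_LEN	(512UL << 20)		/* PMD-aligned region */

/* Process A: opts in to sharing page tables for the range. */
static void process_a(void)
{
	mshare(SHARED_BASE, SHARED_LEN);

	/*
	 * MAP_PRIVATE inside the shared range: the first store COWs a
	 * new anonymous page, but the new PTE lives in the shared page
	 * tables, so every sharer sees that page.  Later stores, by
	 * any sharer, hit the same page -- no further COW, just like
	 * threads of one process.
	 */
	char *p = mmap(SHARED_BASE, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
	if (p == MAP_FAILED)
		return;
	strcpy(p, "visible to every process sharing this range");
}

/* Process B: accepts the same range and sees A's store. */
static void process_b(void)
{
	mshare(SHARED_BASE, SHARED_LEN);
	printf("%s\n", (char *)SHARED_BASE);
}

With ordinary, unshared page tables, process B would fault in its own
copy on write; with shared page tables the single COWed page is what
both processes map, which is exactly the "private to the sharing group"
behaviour above.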