From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 16 Aug 2021 16:59:47 +0100
From: Matthew Wilcox <willy@infradead.org>
To: David Hildenbrand
Cc: Khalid Aziz,
	"Longpeng (Mike, Cloud Infrastructure Service Product Dept.)",
	Steven Sistare, Anthony Yznaga,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	"Gonglei (Arei)"
Subject: Re: [RFC PATCH 0/5] madvise MADV_DOEXEC
References: <88884f55-4991-11a9-d330-5d1ed9d5e688@redhat.com>
 <40bad572-501d-e4cf-80e3-9a8daa98dc7e@redhat.com>
 <3ce1f52f-d84d-49ba-c027-058266e16d81@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

On Mon, Aug 16, 2021 at 05:01:44PM +0200, David Hildenbrand wrote:
> On 16.08.21 16:40, Matthew Wilcox wrote:
> > On Mon, Aug 16, 2021 at 04:33:09PM +0200, David Hildenbrand wrote:
> > > > > I did not follow why we have to play games with MAP_PRIVATE, and
> > > > > having private anonymous pages shared between processes that
> > > > > don't COW, introducing new syscalls etc.
> > > >
> > > > It's not about SHMEM, it's about file-backed pages on regular
> > > > filesystems.  I don't want to have XFS, ext4 and btrfs all with
> > > > their own implementations of ARCH_WANT_HUGE_PMD_SHARE.
> > >
> > > Let me ask this way: why do we have to play such games with
> > > MAP_PRIVATE?
> >
> > : Mappings within this address range behave as if they were shared
> > : between threads, so a write to a MAP_PRIVATE mapping will create a
> > : page which is shared between all the sharers.
> >
> > If so, that's a misunderstanding, because there are no games being
> > played.  What Khalid's saying there is that because the page tables
> > are already shared for that range of address space, the COW of a
> > MAP_PRIVATE will create a new page, but that page will be shared
> > between all the sharers.  The second write to a MAP_PRIVATE page (by
> > any of the sharers) will not create a COW situation.  Just like if
> > all the sharers were threads of the same process.
>
> It actually seems to be just like I understood it.  We'll have multiple
> processes share anonymous pages writable, even though they are not
> using shared memory.
>
> IMHO, sharing page tables to optimize for something kernel-internal
> (page table consumption) should be completely transparent to user
> space.  Just like ARCH_WANT_HUGE_PMD_SHARE currently is, unless I am
> missing something important.
>
> The VM_MAYSHARE check in want_pmd_share()->vma_shareable() makes me
> assume that we really only optimize for MAP_SHARED right now, never
> for MAP_PRIVATE.

It's definitely *not* about being transparent to userspace.  It's about
giving userspace new functionality where multiple processes can choose
to share a portion of their address space with each other.  What any
process changes in that range, every sharing process sees.  mmap(),
munmap(), mprotect(), mremap(), everything.
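
For anyone following along, the VM_MAYSHARE check David refers to lives
in mm/hugetlb.c and looks roughly like this around v5.14 (paraphrased
from memory, so consult the tree for the exact code):

static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
{
	unsigned long base = addr & PUD_MASK;
	unsigned long end = base + PUD_SIZE;

	/*
	 * Only a VM_MAYSHARE mapping whose usable range covers an entire
	 * PUD is a candidate for huge PMD sharing -- which is David's
	 * point that MAP_PRIVATE is never optimized today.
	 */
	if (vma->vm_flags & VM_MAYSHARE && range_in_vma(vma, base, end))
		return true;
	return false;
}

bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr)
{
#ifdef CONFIG_USERFAULTFD
	/* uffd write-protect cannot tolerate a PMD shared across mms */
	if (uffd_disable_huge_pmd_share(vma))
		return false;
#endif
	return vma_shareable(vma, addr);
}

So today's page table sharing is both MAP_SHARED-only and hugetlb-only;
what's under discussion is generalizing the sharing itself rather than
growing per-filesystem copies of it.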
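
And for completeness, a minimal userspace sketch of the semantics
described above.  This is illustrative only: MADV_DOEXEC exists just in
this RFC series, the constant's value below is a placeholder, and the
exact flow is an assumption drawn from the cover letter rather than a
settled ABI:

#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_DOEXEC
#define MADV_DOEXEC 22			/* placeholder value, RFC only */
#endif

int main(void)
{
	size_t len = 2UL << 20;		/* one PMD-sized region */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	/* Ask for this mapping (and its page tables) to be preserved
	 * across exec, per the RFC. */
	if (madvise(p, len, MADV_DOEXEC) < 0)
		return 1;

	/*
	 * Per the explanation above: the first write still COWs a new
	 * page, but because the page tables are shared, that new page
	 * is immediately visible to every sharer; later writes by any
	 * sharer hit the same page with no further COW -- exactly as if
	 * the sharers were threads of one process.
	 */
	memset(p, 0xaa, len);

	/* The exec'd image finds the region already mapped at p. */
	execl("/path/to/consumer", "consumer", (char *)NULL);
	return 1;
}

(The "/path/to/consumer" binary is hypothetical; the point is only the
write-visibility semantics, not the exec plumbing.)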