Date: Fri, 21 Jan 2022 14:47:13 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Barry Song <21cnbao@gmail.com>
Cc: khalid.aziz@oracle.com, Andrew Morton, Arnd Bergmann, Dave Hansen,
	David Hildenbrand, LKML, Linux-MM, longpeng2@huawei.com,
	Mike Rapoport, Suren Baghdasaryan
Subject: Re: [RFC PATCH 0/6] Add support for shared PTEs across processes
References: <20220121010806.5607-1-21cnbao@gmail.com>

On Fri, Jan 21, 2022 at 08:35:17PM +1300, Barry Song wrote:
> On Fri, Jan 21, 2022 at 3:13 PM Matthew Wilcox wrote:
> > On Fri, Jan 21, 2022 at 09:08:06AM +0800, Barry Song wrote:
> > > > A file under /sys/fs/mshare can be opened and read from. A read from
> > > > this file returns two long values - (1) starting address, and (2)
> > > > size of the mshare'd region.
> > > >
> > > > --
> > > > int mshare_unlink(char *name)
> > > >
> > > > A shared address range created by mshare() can be destroyed using
> > > > mshare_unlink() which removes the shared named object. Once all
> > > > processes have unmapped the shared object, the shared address range
> > > > references are de-allocated and destroyed.
> > > >
> > > > mshare_unlink() returns 0 on success or -1 on error.
> > >
> > > I am still struggling with the user scenarios of these new APIs.
> > > This patch supposes multiple processes will have the same virtual
> > > address for the shared area?  How can this be guaranteed while
> > > different processes can map different stacks, heaps, libraries and
> > > files?
> >
> > The two processes choose to share a chunk of their address space.
> > They can map anything they like in that shared area, and then also
> > anything they like in the areas that aren't shared.  They can choose
> > for that shared area to have the same address in both processes
> > or different locations in each process.
> >
> > If two processes want to put a shared library in that shared address
> > space, that should work.  They probably would need to agree to use
> > the same virtual address for the shared page tables for that to work.
>
> We depend on the ELF loader and ld.so to map libraries dynamically, so
> we can hardly find a place in user code to call mshare() to map
> libraries at the application level?

If somebody wants to modify ld.so to take advantage of mshare(), they
could.  That wasn't our primary motivation here, so if it turns out to
not work for that usecase, well, that's a shame.

> > Think of this like hugetlbfs, only instead of sharing hugetlbfs
> > memory, you can share _anything_ that's mmapable.
>
> Yep, we can call mshare() on any kind of memory, for example when
> multiple processes use SYSV shmem or POSIX shmem, or mmap the same
> file.  But it seems more sensible to let the kernel do this
> automatically rather than depending on users calling mshare()?  It is
> difficult for users to decide which areas mshare() should be applied
> to.  Users might want to call mshare() on all shared areas to save
> the memory spent on duplicated PTEs?  Unlike SYSV shmem and POSIX
> shmem, which are a feature for inter-process communication, mshare()
> looks less like a feature for applications and more like a feature
> for the whole system?
> Why would applications have to call something which doesn't directly
> help them?  Without mshare(), those applications will still work
> without any problem, right?  Is there anything in mshare() which is
> a must-have for applications?  Or is mshare() only a suggestion from
> applications, like madvise()?

Our use case is that we have some very large files stored on persistent
memory which we want to mmap in thousands of processes.  So the first
one shares a chunk of its address space and mmaps all the files into
that chunk of address space.  Subsequent processes find that a suitable
address space already exists and use it, sharing the page tables and
avoiding the calls to mmap.

Sharing page tables is akin to running multiple threads in a single
address space, except that only part of the address space is the same.
There does need to be a certain amount of trust between the processes
sharing the address space.  You don't want to do it to an unsuspecting
process.