Subject: Re: [PATCH] mm: introduce reference pages
From: John Hubbard
To: Peter Collingbourne, Andrew Morton, Catalin Marinas
CC: Evgenii Stepanov
Date: Sun, 2 Aug 2020 20:28:08 -0700
Message-ID: <92fa4a71-d8dc-0f07-832c-cbceca43e537@nvidia.com>
In-Reply-To: <20200731203241.50427-1-pcc@google.com>
References: <20200731203241.50427-1-pcc@google.com>
On 7/31/20 1:32 PM, Peter Collingbourne wrote:
...

Hi,

I can see why you want to do this. A few points to consider, below.

btw, the patch would *not* apply for me, via `git am`. I finally used
patch(1), and that worked. It would probably be good to mention which
tree and branch this applies to, as a first step toward avoiding that,
but I'm not quite sure what else went wrong. Maybe use stock git,
instead of 2.28.0.163.g6104cc2f0b6-goog? Just guessing.

> @@ -1684,9 +1695,33 @@ static inline int accountable_mapping(struct file *file, vm_flags_t vm_flags)
>  	return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) == VM_WRITE;
>  }
>
> +static vm_fault_t refpage_fault(struct vm_fault *vmf)
> +{
> +	struct page *page;
> +
> +	if (get_user_pages((unsigned long)vmf->vma->vm_private_data, 1, 0,
> +			   &page, 0) != 1)
> +		return VM_FAULT_SIGSEGV;
> +

This will end up overflowing page->_refcount in some situations.

Some thoughts: in order to implement this feature, the reference pages
need to be made at least a little bit more special, and probably a
little bit more like zero pages. At one extreme, zero pages could even
become a special case of reference pages, although I'm not sure of a
clean way to implement that.

The reason that more special-ness is required is that things such as
reference counting and locking can be special-cased for zero pages.
Doing so allows avoiding page->_refcount overflows, for example. (See
the sketches appended at the end of this reply.)

Your patch here, however, allows normal pages to be treated *almost*
like a zero page, in that each one is a page full of constant data. But
because a refpage can be any page, not just a special one defined at a
single location, that leads to problems with refcounts.

> +	vmf->page = page;
> +	return VM_FAULT_LOCKED;

Is the page really locked, or is this a case of "the page is special,
so we can safely claim it is locked"? Maybe I'm just confused about the
use of VM_FAULT_LOCKED: I thought you should only set it after actually
locking the page. (Sketch below.)

> +}
> +
> +static void refpage_close(struct vm_area_struct *vma)
> +{
> +	/* This function exists only to prevent is_mergeable_vma from allowing a
> +	 * reference page mapping to be merged with an anonymous mapping.
> +	 */

While it is true that implementing a vma's .close() method will prevent
vma merging, this is an abuse of that method: the behavior you want
ends up depending on how is_mergeable_vma() happens to be implemented.
And given that refpages represent a significant new capability, I think
they deserve their own "if" clause (and perhaps a VMA flag) in
is_mergeable_vma(), instead of this kind of minor hack. (Sketch below.)
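
For concreteness, here is the kind of special-casing I mean. This is
loosely paraphrased from the v5.8-era vm_normal_page() in mm/memory.c,
trimmed down to the relevant lines (not a verbatim quote):

	/*
	 * Loosely paraphrased from mm/memory.c: vm_normal_page()
	 * filters the zero page out entirely, so generic mm code never
	 * takes or drops a reference on it.
	 */
	struct page *vm_normal_page(struct vm_area_struct *vma,
				    unsigned long addr, pte_t pte)
	{
		unsigned long pfn = pte_pfn(pte);

		if (pte_special(pte)) {
			/* ... */
			if (is_zero_pfn(pfn))
				return NULL; /* never refcounted */
			/* ... */
		}
		/* ... various sanity checks ... */
		return pfn_to_page(pfn);
	}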
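
Building on that, a rough sketch (just a sketch, untested, and not
your patch's code) of one possible direction: pin the reference page
once at mmap() time, stash it in vm_private_data, and install it via a
special PTE so that, like the zero page, it is never refcounted in the
fault path. Whether VM_PFNMAP semantics are acceptable for this feature
is a separate question:

	/*
	 * Sketch only. Assumes the refpage was resolved and pinned
	 * exactly once, at mmap() time, and stored in vm_private_data.
	 * With VM_PFNMAP set on the vma, vmf_insert_pfn() installs a
	 * special PTE, vm_normal_page() returns NULL for it, and the
	 * fault path never touches page->_refcount.
	 */
	static vm_fault_t refpage_fault_special(struct vm_fault *vmf)
	{
		struct page *page = vmf->vma->vm_private_data;

		return vmf_insert_pfn(vmf->vma, vmf->address,
				      page_to_pfn(page));
	}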
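
On the VM_FAULT_LOCKED question: if the per-fault get_user_pages()
approach is kept, I would have expected something like the following,
because VM_FAULT_LOCKED is supposed to mean that ->page is returned
with its page lock held:

	/* Actually take the page lock before claiming to hold it. */
	lock_page(page);
	vmf->page = page;
	return VM_FAULT_LOCKED;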
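
And on the merging point, a sketch of what an explicit check might look
like. VM_REFPAGE is hypothetical here (no such flag exists in
mainline), and a free vm_flags bit would have to be found for it:

	/* Sketch only: VM_REFPAGE is a hypothetical new vma flag. */
	static inline int is_mergeable_vma(struct vm_area_struct *vma,
			struct file *file, unsigned long vm_flags,
			struct vm_userfaultfd_ctx vm_userfaultfd_ctx)
	{
		/* Never merge a reference page mapping with anything. */
		if ((vma->vm_flags | vm_flags) & VM_REFPAGE)
			return 0;
		/* ... existing checks ... */
		return 1;
	}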
thanks,
-- 
John Hubbard
NVIDIA