Subject: Re: [PATCH 1/4] mm: Trial do_wp_page() simplification
To: Peter Xu, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: "Maya B. Gokhale", Linus Torvalds, Yang Shi, Marty Mcfadden, Kirill Shutemov, Oleg Nesterov, Jann Horn, Jan Kara, Andrea Arcangeli, Christoph Hellwig, Andrew Morton
References: <20200821234958.7896-1-peterx@redhat.com> <20200821234958.7896-2-peterx@redhat.com>
From: Kirill Tkhai
Message-ID: <42bc9a68-ef9e-2542-0b21-392a7f47bd74@virtuozzo.com>
Date: Mon, 24 Aug 2020 11:36:22 +0300
In-Reply-To: <20200821234958.7896-2-peterx@redhat.com>

On 22.08.2020 02:49, Peter Xu wrote:
> From: Linus Torvalds
>
> How about we just make sure we're the only possible valid user of the
> page before we bother to reuse it?
>
> Simplify, simplify, simplify.
>
> And get rid of the nasty serialization on the page lock at the same time.
>
> Signed-off-by: Linus Torvalds
> [peterx: add subject prefix]
> Signed-off-by: Peter Xu
> ---
>  mm/memory.c | 59 +++++++++++++++--------------------------------------
>  1 file changed, 17 insertions(+), 42 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 602f4283122f..cb9006189d22 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2927,50 +2927,25 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
>  	 * not dirty accountable.
>  	 */
>  	if (PageAnon(vmf->page)) {
> -		int total_map_swapcount;
> -		if (PageKsm(vmf->page) && (PageSwapCache(vmf->page) ||
> -					   page_count(vmf->page) != 1))
> +		struct page *page = vmf->page;
> +
> +		/* PageKsm() doesn't necessarily raise the page refcount */

No, this is wrong: PageKsm() always raises the refcount. There was a different problem: KSM may raise the refcount without lock_page(), and only then take the lock. See get_ksm_page(GET_KSM_PAGE_NOLOCK) for the details. So reliable protection against parallel access requires freezing the page counter, which is what reuse_ksm_page() does.
> +		if (PageKsm(page) || page_count(page) != 1)
> +			goto copy;
> +		if (!trylock_page(page))
> +			goto copy;
> +		if (PageKsm(page) || page_mapcount(page) != 1 || page_count(page) != 1) {
> +			unlock_page(page);
>  			goto copy;
> -		if (!trylock_page(vmf->page)) {
> -			get_page(vmf->page);
> -			pte_unmap_unlock(vmf->pte, vmf->ptl);
> -			lock_page(vmf->page);
> -			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
> -					vmf->address, &vmf->ptl);
> -			if (!pte_same(*vmf->pte, vmf->orig_pte)) {
> -				update_mmu_tlb(vma, vmf->address, vmf->pte);
> -				unlock_page(vmf->page);
> -				pte_unmap_unlock(vmf->pte, vmf->ptl);
> -				put_page(vmf->page);
> -				return 0;
> -			}
> -			put_page(vmf->page);
> -		}
> -		if (PageKsm(vmf->page)) {
> -			bool reused = reuse_ksm_page(vmf->page, vmf->vma,
> -						     vmf->address);
> -			unlock_page(vmf->page);
> -			if (!reused)
> -				goto copy;
> -			wp_page_reuse(vmf);
> -			return VM_FAULT_WRITE;
> -		}
> -		if (reuse_swap_page(vmf->page, &total_map_swapcount)) {
> -			if (total_map_swapcount == 1) {
> -				/*
> -				 * The page is all ours. Move it to
> -				 * our anon_vma so the rmap code will
> -				 * not search our parent or siblings.
> -				 * Protected against the rmap code by
> -				 * the page lock.
> -				 */
> -				page_move_anon_rmap(vmf->page, vma);
> -			}
> -			unlock_page(vmf->page);
> -			wp_page_reuse(vmf);
> -			return VM_FAULT_WRITE;
>  		}
> -		unlock_page(vmf->page);
> +		/*
> +		 * Ok, we've got the only map reference, and the only
> +		 * page count reference, and the page is locked,
> +		 * it's dark out, and we're wearing sunglasses. Hit it.
> +		 */
> +		wp_page_reuse(vmf);
> +		unlock_page(page);
> +		return VM_FAULT_WRITE;
>  	} else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
>  					(VM_WRITE|VM_SHARED))) {
>  		return wp_page_shared(vmf);
>