From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 23 Dec 2020 20:04:54 -0800 (PST)
From: Hugh Dickins
X-X-Sender: hugh@eggly.anvils
To: "Kirill A. Shutemov"
Cc: Linus Torvalds, Matthew Wilcox, "Kirill A. Shutemov",
Shutemov" , Will Deacon , Linux Kernel Mailing List , Linux-MM , Linux ARM , Catalin Marinas , Jan Kara , Minchan Kim , Andrew Morton , Vinayak Menon , Android Kernel Team Subject: Re: [PATCH 1/2] mm: Allow architectures to request 'old' entries when prefaulting In-Reply-To: <20201222100047.p5zdb4ghagncq2oe@box> Message-ID: References: <20201214160724.ewhjqoi32chheone@box> <20201216170703.o5lpsnjfmoj7f3ml@box> <20201217105409.2gacwgg7rco2ft3m@box> <20201218110400.yve45r3zsv7qgfa3@box> <20201219124103.w6isern3ywc7xbur@box> <20201222100047.p5zdb4ghagncq2oe@box> User-Agent: Alpine 2.11 (LSU 23 2013-08-11) MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Tue, 22 Dec 2020, Kirill A. Shutemov wrote: > > Updated patch is below. > > From 0ec1bc1fe95587350ac4f4c866d6482383740b36 Mon Sep 17 00:00:00 2001 > From: "Kirill A. Shutemov" > Date: Sat, 19 Dec 2020 15:19:23 +0300 > Subject: [PATCH] mm: Cleanup faultaround and finish_fault() codepaths > > alloc_set_pte() has two users with different requirements: in the > faultaround code, it called from an atomic context and PTE page table > has to be preallocated. finish_fault() can sleep and allocate page table > as needed. > > PTL locking rules are also strange, hard to follow and overkill for > finish_fault(). > > Let's untangle the mess. alloc_set_pte() has gone now. All locking is > explicit. > > The price is some code duplication to handle huge pages in faultaround > path, but it should be fine, having overall improvement in readability. > > Signed-off-by: Kirill A. Shutemov It's not ready yet. I won't pretend to have reviewed, but I did try applying and running with it: mostly it seems to work fine, but turned out to be leaking huge pages (with vmstat's thp_split_page_failed growing bigger and bigger as page reclaim cannot get rid of them). Aside from the actual bug, filemap_map_pmd() seems suboptimal at present: comments below (plus one comment in do_anonymous_page()). > diff --git a/mm/filemap.c b/mm/filemap.c > index 0b2067b3c328..f8fdbe079375 100644 > --- a/mm/filemap.c > +++ b/mm/filemap.c > @@ -2831,10 +2832,74 @@ vm_fault_t filemap_fault(struct vm_fault *vmf) > } > EXPORT_SYMBOL(filemap_fault); > > +static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page, > + struct xa_state *xas) > +{ > + struct vm_area_struct *vma = vmf->vma; > + struct address_space *mapping = vma->vm_file->f_mapping; > + > + /* Huge page is mapped? No need to proceed. */ > + if (pmd_trans_huge(*vmf->pmd)) > + return true; > + > + if (xa_is_value(page)) > + goto nohuge; I think it would be easier to follow if filemap_map_pages() never passed this an xa_is_value(page): probably just skip them in its initial xas_next_entry() loop. > + > + if (!pmd_none(*vmf->pmd)) > + goto nohuge; Then at nohuge it unconditionally takes pmd_lock(), finds !pmd_none, and unlocks again: unnecessary overhead I believe we did not have before. > + > + if (!PageTransHuge(page) || PageLocked(page)) > + goto nohuge; So if PageTransHuge, but someone else temporarily holds PageLocked, we insert a page table at nohuge, sadly preventing it from being mapped here later by huge pmd. 
> +
> +	if (!page_cache_get_speculative(page))
> +		goto nohuge;
> +
> +	if (page != xas_reload(xas))
> +		goto unref;
> +
> +	if (!PageTransHuge(page))
> +		goto unref;
> +
> +	if (!PageUptodate(page) || PageReadahead(page) || PageHWPoison(page))
> +		goto unref;
> +
> +	if (!trylock_page(page))
> +		goto unref;
> +
> +	if (page->mapping != mapping || !PageUptodate(page))
> +		goto unlock;
> +
> +	if (xas->xa_index >= DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE))
> +		goto unlock;
> +
> +	do_set_pmd(vmf, page);

Here is the source of the huge page leak: do_set_pmd() can fail (and
we would do better to have skipped most of its failure cases long
before getting this far). It worked without leaking once I patched it:

-	do_set_pmd(vmf, page);
-	unlock_page(page);
-	return true;
+	if (do_set_pmd(vmf, page) == 0) {
+		unlock_page(page);
+		return true;
+	}

> +	unlock_page(page);
> +	return true;
> +unlock:
> +	unlock_page(page);
> +unref:
> +	put_page(page);
> +nohuge:
> +	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
> +	if (likely(pmd_none(*vmf->pmd))) {
> +		mm_inc_nr_ptes(vma->vm_mm);
> +		pmd_populate(vma->vm_mm, vmf->pmd, vmf->prealloc_pte);
> +		vmf->prealloc_pte = NULL;
> +	}
> +	spin_unlock(vmf->ptl);

I think it's a bit weird to hide this page table insertion inside
filemap_map_pmd() (I guess you're thinking that this function deals
with pmd level, but I'd find it easier to have a filemap_map_huge()
dealing with the huge mapping). Better to do it on return into
filemap_map_pages(); maybe filemap_map_pmd() or filemap_map_huge()
would then need to return vm_fault_t rather than bool, I didn't try.

> +
> +	/* See comment in handle_pte_fault() */
> +	if (pmd_devmap_trans_unstable(vmf->pmd))
> +		return true;
> +
> +	return false;
> +}
...
> diff --git a/mm/memory.c b/mm/memory.c
> index c48f8df6e502..96d62774096a 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3490,7 +3490,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>  	if (pte_alloc(vma->vm_mm, vmf->pmd))
>  		return VM_FAULT_OOM;
>
> -	/* See the comment in pte_alloc_one_map() */
> +	/* See the comment in map_set_pte() */

No, no such function: probably should be like the others and say
	/* See comment in handle_pte_fault() */

Hugh