From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 6 Jul 2021 11:35:18 -0400
From: Peter Xu <peterx@redhat.com>
To: Alistair Popple
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Mike Kravetz,
	"Kirill A . Shutemov", Jason Gunthorpe, Hugh Dickins,
	Matthew Wilcox, Andrew Morton, Miaohe Lin, Jerome Glisse,
	Nadav Amit, Axel Rasmussen, Andrea Arcangeli, Mike Rapoport
Subject: Re: [PATCH v3 11/27] shmem/userfaultfd: Persist uffd-wp bit across
	zapping for file-backed
References: <20210527201927.29586-1-peterx@redhat.com>
	<1857347.At2d1zFpmm@nvdebian> <3895609.yFXQBJUcoq@nvdebian>
In-Reply-To: <3895609.yFXQBJUcoq@nvdebian>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline

On Tue, Jul 06, 2021 at 03:40:42PM +1000, Alistair Popple wrote:
> > > > > > > >  struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
> > > > > > > >  			    pte_t pte);
> > > > > > > >  struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
> > > > > > > > diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
> > > > > > > > index 355ea1ee32bd..c29a6ef3a642 100644
> > > > > > > > --- a/include/linux/mm_inline.h
> > > > > > > > +++ b/include/linux/mm_inline.h
> > > > > > > > @@ -4,6 +4,8 @@
> > > > > > > >  
> > > > > > > >  #include
> > > > > > > >  #include
> > > > > > > > +#include
> > > > > > > > +#include
> > > > > > > >  
> > > > > > > >  /**
> > > > > > > >   * page_is_file_lru - should the page be on a file LRU or anon LRU?
> > > > > > > > @@ -104,4 +106,45 @@ static __always_inline void del_page_from_lru_list(struct page *page,
> > > > > > > >  	update_lru_size(lruvec, page_lru(page), page_zonenum(page),
> > > > > > > >  			-thp_nr_pages(page));
> > > > > > > >  }
> > > > > > > > +
> > > > > > > > +/*
> > > > > > > > + * If this pte is wr-protected by uffd-wp in any form, arm the special pte to
> > > > > > > > + * replace a none pte.  NOTE!  This should only be called when *pte is already
> > > > > > > > + * cleared so we will never accidentally replace something valuable.  Meanwhile
> > > > > > > > + * none pte also means we are not demoting the pte so if tlb flushed then we
> > > > > > > > + * don't need to do it again; otherwise if tlb flush is postponed then it's
> > > > > > > > + * even better.
> > > > > > > > + *
> > > > > > > > + * Must be called with pgtable lock held.
> > > > > > > > + */
> > > > > > > > +static inline void
> > > > > > > > +pte_install_uffd_wp_if_needed(struct vm_area_struct *vma, unsigned long addr,
> > > > > > > > +			      pte_t *pte, pte_t pteval)
> > > > > > > > +{
> > > > > > > > +#ifdef CONFIG_USERFAULTFD
> > > > > > > > +	bool arm_uffd_pte = false;
> > > > > > > > +
> > > > > > > > +	/* The current status of the pte should be "cleared" before calling */
> > > > > > > > +	WARN_ON_ONCE(!pte_none(*pte));
> > > > > > > > +
> > > > > > > > +	if (vma_is_anonymous(vma))
> > > > > > > > +		return;
> > > > > > > > +
> > > > > > > > +	/* A uffd-wp wr-protected normal pte */
> > > > > > > > +	if (unlikely(pte_present(pteval) && pte_uffd_wp(pteval)))
> > > > > > > > +		arm_uffd_pte = true;
> > > > > > > > +
> > > > > > > > +	/*
> > > > > > > > +	 * A uffd-wp wr-protected swap pte.  Note: this should even work for
> > > > > > > > +	 * pte_swp_uffd_wp_special() too.
> > > > > > > > +	 */
> > > > > > >
> > > > > > > I'm probably missing something, but when can we actually have this case, and
> > > > > > > why would we want to leave a special pte behind? From what I can tell this is
> > > > > > > called from try_to_unmap_one(), where this won't be true, or from
> > > > > > > zap_pte_range() when not skipping swap pages.
> > > > > >
> > > > > > Yes, this is a good question..
> > > > > >
> > > > > > Initially I made this function cover all forms of the uffd-wp bit, both
> > > > > > swap and present ptes; imho that's pretty safe.  However for !anonymous
> > > > > > cases we don't keep a swap entry inside the pte even if swapped out, as
> > > > > > the pages should reside in the shmem page cache.  The only missing piece
> > > > > > seems to be the device private entries, as you also spotted below.
> > > > >
> > > > > Yes, I think it's *probably* safe, although I don't yet have a strong
> > > > > opinion here ...
> > > > >
> > > > > > > > +	if (unlikely(is_swap_pte(pteval) && pte_swp_uffd_wp(pteval)))
> > > > >
> > > > > ... however if this can never happen, would a WARN_ON() be better? It would
> > > > > also mean you could remove arm_uffd_pte.
> > > >
> > > > Hmm, after a second thought I think we can't make it a WARN_ON_ONCE()..
> > > > This can still be useful for a private mapping of a shmem file: in that
> > > > case we'll have the swap entry stored in the pte, not the page cache, so
> > > > after page reclaim the pte will contain a valid swap entry while the vma
> > > > is still "!anonymous". [1]
> > >
> > > There's something (probably obvious) I must still be missing here. During
> > > reclaim won't a private shmem mapping still have a present pteval here?
> > > Therefore it won't trigger this case - the uffd-wp bit is set when the swap
> > > entry is established further down in try_to_unmap_one(), right?
> >
> > I agree if it's at the point when it gets reclaimed, however what if we zap
> > a pte of a page that already got reclaimed?  It should have the swap pte
> > installed, imho, which will have
> > "is_swap_pte(pteval) && pte_swp_uffd_wp(pteval)" == true.
>
> Apologies for the delay getting back to this, I hope to find some more time
> to look at this again this week.

No problem, please take your time on reviewing the series.

> I guess what I am missing is why we care about a swap pte for a reclaimed
> page getting zapped. I thought that would imply the mapping was getting torn
> down, although I suppose in that case you still want the uffd-wp to apply in
> case a new mapping appears there?

For the torn-down case it'll always have ZAP_FLAG_DROP_FILE_UFFD_WP set, so
pte_install_uffd_wp_if_needed() won't be called, as zap_drop_file_uffd_wp()
will return true:

static inline void
zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
			      unsigned long addr, pte_t *pte,
			      struct zap_details *details, pte_t pteval)
{
	if (zap_drop_file_uffd_wp(details))
		return;
	pte_install_uffd_wp_if_needed(vma, addr, pte, pteval);
}

As you can see, it's non-trivial to fully digest all of its caller stacks.
What I wanted pte_install_uffd_wp_if_needed() to be is simply a helper that
can convert any form of uffd-wp pte into a pte marker before the pte is set
to none.  Since uffd-wp can exist in two forms (either present or swap),
covering both forms (and, for the swap form, also the uffd-wp special pte
itself) is a clear idea and easy to understand to me.  I don't even need to
worry about who is calling it, or which case can be a swap pte and which
cannot - we just call it whenever we want to persist the uffd-wp bit after a
pte got cleared.  That's why in all cases I still prefer to keep it as is; it
keeps things straightforward.

Thanks,

-- 
Peter Xu