From: Linus Torvalds
Date: Tue, 11 Aug 2020 12:24:20 -0700
Subject: Re: [PATCH v3] mm/gup: Allow real explicit breaking of COW
To: Peter Xu
Cc: Linux-MM, Linux Kernel Mailing List, Andrew Morton, Marty Mcfadden,
 "Maya B. Gokhale", Andrea Arcangeli, Jann Horn, Christoph Hellwig,
 Oleg Nesterov, Kirill Shutemov, Jan Kara
In-Reply-To: <20200811183950.10603-1-peterx@redhat.com>
References: <20200811183950.10603-1-peterx@redhat.com>

On Tue, Aug 11, 2020 at 11:40 AM Peter Xu wrote:
>
> index 206f52b36ffb..c88f773d03af 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1296,7 +1296,17 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
>  	if (reuse_swap_page(page, NULL)) {
>  		pmd_t entry;
>  		entry = pmd_mkyoung(orig_pmd);
> -		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
> +		entry = pmd_mkdirty(entry);
> +		if (pmd_uffd_wp(orig_pmd))
> +			/*
> +			 * This can happen when an uffd-wp protected page is
> +			 * copied due to enfornced COW. When it happens, we
> +			 * need to keep the uffd-wp bit even after COW, and
> +			 * make sure write bit is kept cleared.
> +			 */
> +			entry = pmd_mkuffd_wp(pmd_wrprotect(entry));
> +		else
> +			entry = maybe_pmd_mkwrite(entry, vma);
>  		if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1))
>  			update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
>  		unlock_page(page);
> diff --git a/mm/memory.c b/mm/memory.c
> index c39a13b09602..b27b555a9df8 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2706,7 +2706,17 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
>  	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
>  	entry = mk_pte(new_page, vma->vm_page_prot);
>  	entry = pte_sw_mkyoung(entry);
> -	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
> +	entry = pte_mkdirty(entry);
> +	if (pte_uffd_wp(vmf->orig_pte))
> +		/*
> +		 * This can happen when an uffd-wp protected page is
> +		 * copied due to enfornced COW. When it happens, we
> +		 * need to keep the uffd-wp bit even after COW, and
> +		 * make sure write bit is kept cleared.
> +		 */
> +		entry = pte_mkuffd_wp(pte_wrprotect(entry));
> +	else
> +		entry = maybe_mkwrite(entry, vma);
>  	/*
>  	 * Clear the pte entry and flush it first, before updating the
>  	 * pte with the new entry. This will avoid a race condition

I think this needs to be cleaned up some way. I realize it's not an
exact duplicate (pmd vs pte), but this code is illegible.

Maybe just making it a helper inline function (well, two separate
ones) with the comment above the function would resolve my "this is
very ugly" concerns.
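Something like this, maybe (completely untested, and the helper name
is invented purely for illustration; the pmd side in huge_memory.c
would want a similar twin using the pmd_* accessors):

/*
 * An uffd-wp protected page can get copied due to enforced COW (e.g. a
 * read-only GUP breaking COW).  When that happens, carry the uffd-wp
 * bit over to the new entry and keep the write bit cleared.
 */
static inline pte_t maybe_mkwrite_uffd_wp(pte_t entry, pte_t orig_pte,
					  struct vm_area_struct *vma)
{
	if (pte_uffd_wp(orig_pte))
		return pte_mkuffd_wp(pte_wrprotect(entry));
	return maybe_mkwrite(entry, vma);
}

so that the call site in wp_page_copy() collapses back to a one-liner:

	entry = maybe_mkwrite_uffd_wp(pte_mkdirty(entry), vmf->orig_pte, vma);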
> @@ -2900,7 +2910,13 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
>
> -	if (userfaultfd_pte_wp(vma, *vmf->pte)) {
> +	/*
> +	 * Userfaultfd-wp only cares about real writes. E.g., enforced COW for
> +	 * read does not count. When that happens, we will do the COW with the
> +	 * UFFD_WP bit inherited from the original PTE/PMD.
> +	 */
> +	if ((vmf->flags & FAULT_FLAG_WRITE) &&
> +	    userfaultfd_pte_wp(vma, *vmf->pte)) {
>  		pte_unmap_unlock(vmf->pte, vmf->ptl);
>  		return handle_userfault(vmf, VM_UFFD_WP);
>  	}
> @@ -4117,7 +4133,14 @@ static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
>  static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
>  {
>  	if (vma_is_anonymous(vmf->vma)) {
> -		if (userfaultfd_huge_pmd_wp(vmf->vma, orig_pmd))
> +		/*
> +		 * Userfaultfd-wp only cares about real writes. E.g., enforced
> +		 * COW for read does not count. When that happens, we will do
> +		 * the COW with the UFFD_WP bit inherited from the original
> +		 * PTE/PMD.
> +		 */
> +		if ((vmf->flags & FAULT_FLAG_WRITE) &&
> +		    userfaultfd_huge_pmd_wp(vmf->vma, orig_pmd))
>  			return handle_userfault(vmf, VM_UFFD_WP);

Here again the comment placement could be improved. Particularly in
the do_wp_page() case, we have a big and somewhat complex function,
and this duplicated boilerplate makes me worry. Making it a helper
function with a comment above would, I think, again make it more
legible.

And I think Jann is on the money wrt the follow_page_pte() issue. I
think you broke COW breaking there entirely. That was one of the
reasons I did just that "make it use FOLL_WRITE" originally, because
it meant that we couldn't have any subtle places we'd missed.

Now I wonder if there's any other case of FOLL_WRITE that is missing.

          Linus