From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: "Maya B. Gokhale", Linus Torvalds, Yang Shi, Marty Mcfadden,
	peterx@redhat.com, Kirill Shutemov, Oleg Nesterov, Jann Horn,
	Jan Kara, Kirill Tkhai, Andrea Arcangeli, Christoph Hellwig,
	Andrew Morton
Subject: [PATCH 3/4] mm/gup: Remove enforced COW mechanism
Date: Fri, 21 Aug 2020 19:49:57 -0400
Message-Id: <20200821234958.7896-4-peterx@redhat.com>
In-Reply-To: <20200821234958.7896-1-peterx@redhat.com>
References: <20200821234958.7896-1-peterx@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

With the more strict (but greatly simplified) page reuse logic in
do_wp_page(), we can safely go back to
the world where cow is not enforced with writes.

This (majorly) reverts commit 17839856fd588f4ab6b789f482ed3ffd7c403e1f.

There are some context differences due to some changes made later on
around it:

  2170ecfa7688 ("drm/i915: convert get_user_pages() --> pin_user_pages()", 2020-06-03)
  376a34efa4ee ("mm/gup: refactor and de-duplicate gup_fast() code", 2020-06-03)

Some lines moved back and forth with those, but this revert patch should
have stripped out and covered all the enforced cow bits anyway.

Suggested-by: Linus Torvalds
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c |  8 -----
 mm/gup.c                                    | 40 +++------------------
 mm/huge_memory.c                            |  7 ++--
 3 files changed, 9 insertions(+), 46 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 2c2bf24140c9..12b30075134a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -596,14 +596,6 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 					 GFP_KERNEL |
 					 __GFP_NORETRY |
 					 __GFP_NOWARN);
-		/*
-		 * Using __get_user_pages_fast() with a read-only
-		 * access is questionable. A read-only page may be
-		 * COW-broken, and then this might end up giving
-		 * the wrong side of the COW..
-		 *
-		 * We may or may not care.
-		 */
 		if (pvec) {
 			/* defer to worker if malloc fails */
 			if (!i915_gem_object_is_readonly(obj))
diff --git a/mm/gup.c b/mm/gup.c
index ae096ea7583f..bb93251194d8 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -381,22 +381,13 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
 }
 
 /*
- * FOLL_FORCE or a forced COW break can write even to unwritable pte's,
- * but only after we've gone through a COW cycle and they are dirty.
+ * FOLL_FORCE can write to even unwritable pte's, but only
+ * after we've gone through a COW cycle and they are dirty.
  */
 static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
 {
-	return pte_write(pte) || ((flags & FOLL_COW) && pte_dirty(pte));
-}
-
-/*
- * A (separate) COW fault might break the page the other way and
- * get_user_pages() would return the page from what is now the wrong
- * VM. So we need to force a COW break at GUP time even for reads.
- */
-static inline bool should_force_cow_break(struct vm_area_struct *vma, unsigned int flags)
-{
-	return is_cow_mapping(vma->vm_flags) && (flags & (FOLL_GET | FOLL_PIN));
+	return pte_write(pte) ||
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
 }
 
 static struct page *follow_page_pte(struct vm_area_struct *vma,
@@ -1067,11 +1058,9 @@ static long __get_user_pages(struct mm_struct *mm,
 				goto out;
 			}
 			if (is_vm_hugetlb_page(vma)) {
-				if (should_force_cow_break(vma, foll_flags))
-					foll_flags |= FOLL_WRITE;
 				i = follow_hugetlb_page(mm, vma, pages, vmas,
 						&start, &nr_pages, i,
-						foll_flags, locked);
+						gup_flags, locked);
 				if (locked && *locked == 0) {
 					/*
 					 * We've got a VM_FAULT_RETRY
@@ -1085,10 +1074,6 @@ static long __get_user_pages(struct mm_struct *mm,
 				continue;
 			}
 		}
-
-		if (should_force_cow_break(vma, foll_flags))
-			foll_flags |= FOLL_WRITE;
-
 retry:
 		/*
 		 * If we have a pending SIGKILL, don't keep faulting pages and
@@ -2689,19 +2674,6 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
 		return -EFAULT;
 
 	/*
-	 * The FAST_GUP case requires FOLL_WRITE even for pure reads,
-	 * because get_user_pages() may need to cause an early COW in
-	 * order to avoid confusing the normal COW routines. So only
-	 * targets that are already writable are safe to do by just
-	 * looking at the page tables.
-	 *
-	 * NOTE! With FOLL_FAST_ONLY we allow read-only gup_fast() here,
-	 * because there is no slow path to fall back on. But you'd
-	 * better be careful about possible COW pages - you'll get _a_
-	 * COW page, but not necessarily the one you intended to get
-	 * depending on what COW event happens after this. COW may break
-	 * the page copy in a random direction.
-	 *
	 * Disable interrupts. The nested form is used, in order to allow
	 * full, general purpose use of this routine.
	 *
@@ -2714,8 +2686,6 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
 	 */
 	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) && gup_fast_permitted(start, end)) {
 		unsigned long fast_flags = gup_flags;
-		if (!(gup_flags & FOLL_FAST_ONLY))
-			fast_flags |= FOLL_WRITE;
 
 		local_irq_save(flags);
 		gup_pgd_range(addr, end, fast_flags, pages, &nr_pinned);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2ccff8472cd4..7ff29cc3d55c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1291,12 +1291,13 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 }
 
 /*
- * FOLL_FORCE or a forced COW break can write even to unwritable pmd's,
- * but only after we've gone through a COW cycle and they are dirty.
+ * FOLL_FORCE can write to even unwritable pmd's, but only
+ * after we've gone through a COW cycle and they are dirty.
  */
 static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
 {
-	return pmd_write(pmd) || ((flags & FOLL_COW) && pmd_dirty(pmd));
+	return pmd_write(pmd) ||
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
 }
 
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
-- 
2.26.2