From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Nadav Amit, Miaohe Lin, Mike Rapoport, Andrea Arcangeli, Hugh Dickins,
 peterx@redhat.com, Jerome Glisse, Mike Kravetz, Jason Gunthorpe,
 Matthew Wilcox, Andrew Morton, Axel Rasmussen, "Kirill A. Shutemov"
Subject: [PATCH v2 09/24] mm: Pass zap_flags into unmap_mapping_pages()
Date: Tue, 27 Apr 2021 12:13:02 -0400
Message-Id: <20210427161317.50682-10-peterx@redhat.com>
In-Reply-To: <20210427161317.50682-1-peterx@redhat.com>
References: <20210427161317.50682-1-peterx@redhat.com>

Give unmap_mapping_pages() more power by allowing callers to specify a zap
flag, so that it can pass in more information than just "whether we'd also
like to zap cow pages".

With the new flag, we can remove the even_cows parameter, because
even_cows==false is equivalent to zap_flags==ZAP_FLAG_CHECK_MAPPING, while
even_cows==true means passing in no zap flag at all (though in most cases
we have had even_cows==false).

No functional change intended.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
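Note for reviewers (not part of the commit message): the even_cows ->
zap_flags translation can be seen in a small, self-contained userspace
sketch.  The flag values and the two helper functions below are
illustrative placeholders, not the kernel definitions from earlier in this
series; the sketch only models the translation now done by
unmap_mapping_range() and unmap_mapping_pages().

/*
 * Illustrative userspace model only: flag values and helper names are
 * made up for this note; this is not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

#define ZAP_FLAG_CHECK_MAPPING	(1UL << 0)	/* only zap pages of the given mapping */
#define ZAP_FLAG_SKIP_SWAP	(1UL << 1)	/* don't touch swap entries */

/* Models how unmap_mapping_range() now forwards its even_cows argument. */
static unsigned long even_cows_to_zap_flags(bool even_cows)
{
	return even_cows ? 0 : ZAP_FLAG_CHECK_MAPPING;
}

/* Models how unmap_mapping_pages() builds details.zap_flags. */
static unsigned long details_zap_flags(unsigned long zap_flags)
{
	return zap_flags | ZAP_FLAG_SKIP_SWAP;
}

int main(void)
{
	for (int even_cows = 0; even_cows <= 1; even_cows++) {
		unsigned long flags =
			details_zap_flags(even_cows_to_zap_flags(even_cows));

		printf("even_cows=%d -> zap_flags=0x%lx (check_mapping=%d skip_swap=%d)\n",
		       even_cows, flags,
		       !!(flags & ZAP_FLAG_CHECK_MAPPING),
		       !!(flags & ZAP_FLAG_SKIP_SWAP));
	}
	return 0;
}

Compiled with "cc -Wall" and run, this prints 0x3 for even_cows=0 and 0x2
for even_cows=1: COW pages are only spared when ZAP_FLAG_CHECK_MAPPING is
set, which matches the !even_cows branch removed from mm/memory.c below.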
 fs/dax.c           | 10 ++++++----
 include/linux/mm.h |  4 ++--
 mm/khugepaged.c    |  3 ++-
 mm/memory.c        | 15 ++++++++-------
 mm/truncate.c      | 11 +++++++----
 5 files changed, 25 insertions(+), 18 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 69216241392f2..20ca8d7d36ebb 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -517,7 +517,7 @@ static void *grab_mapping_entry(struct xa_state *xas,
 				xas_unlock_irq(xas);
 				unmap_mapping_pages(mapping,
 						xas->xa_index & ~PG_PMD_COLOUR,
-						PG_PMD_NR, false);
+						PG_PMD_NR, ZAP_FLAG_CHECK_MAPPING);
 				xas_reset(xas);
 				xas_lock_irq(xas);
 			}
@@ -612,7 +612,8 @@ struct page *dax_layout_busy_page_range(struct address_space *mapping,
 	 * guaranteed to either see new references or prevent new
 	 * references from being established.
 	 */
-	unmap_mapping_pages(mapping, start_idx, end_idx - start_idx + 1, 0);
+	unmap_mapping_pages(mapping, start_idx, end_idx - start_idx + 1,
+			    ZAP_FLAG_CHECK_MAPPING);
 
 	xas_lock_irq(&xas);
 	xas_for_each(&xas, entry, end_idx) {
@@ -743,9 +744,10 @@ static void *dax_insert_entry(struct xa_state *xas,
 		/* we are replacing a zero page with block mapping */
 		if (dax_is_pmd_entry(entry))
 			unmap_mapping_pages(mapping, index & ~PG_PMD_COLOUR,
-					PG_PMD_NR, false);
+					PG_PMD_NR, ZAP_FLAG_CHECK_MAPPING);
 		else /* pte entry */
-			unmap_mapping_pages(mapping, index, 1, false);
+			unmap_mapping_pages(mapping, index, 1,
+					    ZAP_FLAG_CHECK_MAPPING);
 	}
 
 	xas_reset(xas);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2227e9107e53e..b8aa81a064a55 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1784,7 +1784,7 @@ extern int fixup_user_fault(struct mm_struct *mm,
 			    unsigned long address, unsigned int fault_flags,
 			    bool *unlocked);
 void unmap_mapping_pages(struct address_space *mapping,
-		pgoff_t start, pgoff_t nr, bool even_cows);
+		pgoff_t start, pgoff_t nr, unsigned long zap_flags);
 void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows);
 #else
@@ -1804,7 +1804,7 @@ static inline int fixup_user_fault(struct mm_struct *mm, unsigned long address,
 	return -EFAULT;
 }
 static inline void unmap_mapping_pages(struct address_space *mapping,
-		pgoff_t start, pgoff_t nr, bool even_cows) { }
+		pgoff_t start, pgoff_t nr, unsigned long zap_flags) { }
 static inline void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows) { }
 #endif
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index e8b299aa32d06..64a36cd375359 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1831,7 +1831,8 @@ static void collapse_file(struct mm_struct *mm,
 		}
 
 		if (page_mapped(page))
-			unmap_mapping_pages(mapping, index, 1, false);
+			unmap_mapping_pages(mapping, index, 1,
+					    ZAP_FLAG_CHECK_MAPPING);
 
 		xas_lock_irq(&xas);
 		xas_set(&xas, index);
diff --git a/mm/memory.c b/mm/memory.c
index 5325c1c2cbd78..189f60853a51d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3224,7 +3224,10 @@ static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
  * @mapping: The address space containing pages to be unmapped.
  * @start: Index of first page to be unmapped.
  * @nr: Number of pages to be unmapped. 0 to unmap to end of file.
- * @even_cows: Whether to unmap even private COWed pages.
+ * @zap_flags: Zap flags for the process.  E.g., when ZAP_FLAG_CHECK_MAPPING is
+ *   passed into it, we will only zap the pages that are in the same mapping
+ *   specified in the @mapping parameter; otherwise we will not check mapping,
+ *   IOW cow pages will be zapped too.
  *
  * Unmap the pages in this address space from any userspace process which
  * has them mmaped. Generally, you want to remove COWed pages as well when
@@ -3232,17 +3235,14 @@ static inline void unmap_mapping_range_tree(struct rb_root_cached *root,
  * cache.
  */
 void unmap_mapping_pages(struct address_space *mapping, pgoff_t start,
-		pgoff_t nr, bool even_cows)
+		pgoff_t nr, unsigned long zap_flags)
 {
 	pgoff_t first_index = start, last_index = start + nr - 1;
 	struct zap_details details = {
 		.zap_mapping = mapping,
-		.zap_flags = ZAP_FLAG_SKIP_SWAP,
+		.zap_flags = zap_flags | ZAP_FLAG_SKIP_SWAP,
 	};
 
-	if (!even_cows)
-		details.zap_flags |= ZAP_FLAG_CHECK_MAPPING;
-
 	if (last_index < first_index)
 		last_index = ULONG_MAX;
 
@@ -3284,7 +3284,8 @@ void unmap_mapping_range(struct address_space *mapping,
 		hlen = ULONG_MAX - hba + 1;
 	}
 
-	unmap_mapping_pages(mapping, hba, hlen, even_cows);
+	unmap_mapping_pages(mapping, hba, hlen, even_cows ?
+			    0 : ZAP_FLAG_CHECK_MAPPING);
 }
 EXPORT_SYMBOL(unmap_mapping_range);
 
diff --git a/mm/truncate.c b/mm/truncate.c
index 95af244b112a0..ba2cbe300e83e 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -172,7 +172,8 @@ truncate_cleanup_page(struct address_space *mapping, struct page *page)
 {
 	if (page_mapped(page)) {
 		unsigned int nr = thp_nr_pages(page);
-		unmap_mapping_pages(mapping, page->index, nr, false);
+		unmap_mapping_pages(mapping, page->index, nr,
+				    ZAP_FLAG_CHECK_MAPPING);
 	}
 
 	if (page_has_private(page))
@@ -652,14 +653,15 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 					 * Zap the rest of the file in one hit.
 					 */
 					unmap_mapping_pages(mapping, index,
-						(1 + end - index), false);
+						(1 + end - index),
+						ZAP_FLAG_CHECK_MAPPING);
 					did_range_unmap = 1;
 				} else {
 					/*
 					 * Just zap this page
 					 */
 					unmap_mapping_pages(mapping, index,
-						1, false);
+						1, ZAP_FLAG_CHECK_MAPPING);
 				}
 			}
 			BUG_ON(page_mapped(page));
@@ -685,7 +687,8 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 	 * get remapped later.
 	 */
 	if (dax_mapping(mapping)) {
-		unmap_mapping_pages(mapping, start, end - start + 1, false);
+		unmap_mapping_pages(mapping, start, end - start + 1,
+				    ZAP_FLAG_CHECK_MAPPING);
 	}
 out:
 	cleancache_invalidate_inode(mapping);
-- 
2.26.2