From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org
Subject: [PATCH 40/75] mm: Add pvmw_set_page() and pvmw_set_folio()
Date: Fri, 4 Feb 2022 19:58:17 +0000
Message-Id: <20220204195852.1751729-41-willy@infradead.org>
In-Reply-To: <20220204195852.1751729-1-willy@infradead.org>
References: <20220204195852.1751729-1-willy@infradead.org>

Instead of setting the page directly in struct page_vma_mapped_walk,
use this helper to allow us to transition to a PFN approach in the
next patch.
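
[Editor's note: a purely illustrative sketch of the calling convention this
patch establishes. example_one() is a hypothetical rmap walker, not part of
the patch; it is modelled on the callers converted in the hunks below.]

static bool example_one(struct page *page, struct vm_area_struct *vma,
		unsigned long address, void *arg)
{
	struct page_vma_mapped_walk pvmw = {
		.vma = vma,
		.address = address,
	};

	/*
	 * Set the target via the helper instead of a .page initialiser,
	 * so the next patch can change the internal representation to a
	 * PFN without touching this caller again.
	 */
	pvmw_set_page(&pvmw, page);

	while (page_vma_mapped_walk(&pvmw)) {
		/* pvmw.pte (or pvmw.pmd) now maps the page in this vma */
	}
	return true;
}

[A folio-based caller would call pvmw_set_folio(&pvmw, folio) at the same
point.]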
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/rmap.h    | 12 ++++++++++++
 kernel/events/uprobes.c |  2 +-
 mm/damon/paddr.c        |  4 ++--
 mm/ksm.c                |  2 +-
 mm/migrate.c            |  2 +-
 mm/page_idle.c          |  2 +-
 mm/rmap.c               | 12 ++++++------
 7 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index e704b1a4c06c..e076aca3a203 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -213,6 +213,18 @@ struct page_vma_mapped_walk {
 	unsigned int flags;
 };
 
+static inline void pvmw_set_page(struct page_vma_mapped_walk *pvmw,
+		struct page *page)
+{
+	pvmw->page = page;
+}
+
+static inline void pvmw_set_folio(struct page_vma_mapped_walk *pvmw,
+		struct folio *folio)
+{
+	pvmw->page = &folio->page;
+}
+
 static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
 {
 	/* HugeTLB pte is set to the relevant page table entry without pte_mapped. */
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 6357c3580d07..5f74671b0066 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -156,13 +156,13 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page_vma_mapped_walk pvmw = {
-		.page = compound_head(old_page),
 		.vma = vma,
 		.address = addr,
 	};
 	int err;
 	struct mmu_notifier_range range;
 
+	pvmw_set_page(&pvmw, compound_head(old_page));
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, addr,
 				addr + PAGE_SIZE);
 
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 5e8244f65a1a..4e27d64abbb7 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -20,11 +20,11 @@ static bool __damon_pa_mkold(struct page *page, struct vm_area_struct *vma,
 		unsigned long addr, void *arg)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = addr,
 	};
 
+	pvmw_set_page(&pvmw, page);
 	while (page_vma_mapped_walk(&pvmw)) {
 		addr = pvmw.address;
 		if (pvmw.pte)
@@ -94,11 +94,11 @@ static bool __damon_pa_young(struct page *page, struct vm_area_struct *vma,
 {
 	struct damon_pa_access_chk_result *result = arg;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = addr,
 	};
 
+	pvmw_set_page(&pvmw, page);
 	result->accessed = false;
 	result->page_sz = PAGE_SIZE;
 	while (page_vma_mapped_walk(&pvmw)) {
diff --git a/mm/ksm.c b/mm/ksm.c
index c20bd4d9a0d9..1639160c9e9a 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1035,13 +1035,13 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 	};
 	int swapped;
 	int err = -EFAULT;
 	struct mmu_notifier_range range;
 
+	pvmw_set_page(&pvmw, page);
 	pvmw.address = page_address_in_vma(page, vma);
 	if (pvmw.address == -EFAULT)
 		goto out;
diff --git a/mm/migrate.c b/mm/migrate.c
index c7da064b4781..07464fd45925 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -177,7 +177,6 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 		unsigned long addr, void *old)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = old,
 		.vma = vma,
 		.address = addr,
 		.flags = PVMW_SYNC | PVMW_MIGRATION,
@@ -187,6 +186,7 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 	swp_entry_t entry;
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
+	pvmw_set_page(&pvmw, old);
 	while (page_vma_mapped_walk(&pvmw)) {
 		if (PageKsm(page))
 			new = page;
diff --git a/mm/page_idle.c b/mm/page_idle.c
index edead6a8a5f9..20d35d720872 100644
--- a/mm/page_idle.c
+++ b/mm/page_idle.c
@@ -49,12 +49,12 @@ static bool page_idle_clear_pte_refs_one(struct page *page,
 					unsigned long addr, void *arg)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = addr,
 	};
 	bool referenced = false;
 
+	pvmw_set_page(&pvmw, page);
 	while (page_vma_mapped_walk(&pvmw)) {
 		addr = pvmw.address;
 		if (pvmw.pte) {
diff --git a/mm/rmap.c b/mm/rmap.c
index a531b64d53fa..fa8478372e94 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -803,12 +803,12 @@ static bool page_referenced_one(struct page *page, struct vm_area_struct *vma,
 {
 	struct page_referenced_arg *pra = arg;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 	};
 	int referenced = 0;
 
+	pvmw_set_page(&pvmw, page);
 	while (page_vma_mapped_walk(&pvmw)) {
 		address = pvmw.address;
 
@@ -932,7 +932,6 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 			    unsigned long address, void *arg)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 		.flags = PVMW_SYNC,
@@ -940,6 +939,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 	struct mmu_notifier_range range;
 	int *cleaned = arg;
 
+	pvmw_set_page(&pvmw, page);
 	/*
 	 * We have to assume the worse case ie pmd for invalidation. Note that
 	 * the page can not be free from this function.
@@ -1423,7 +1423,6 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 	};
@@ -1433,6 +1432,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
 
+	pvmw_set_page(&pvmw, page);
 	/*
 	 * When racing against e.g. zap_pte_range() on another cpu,
 	 * in between its ptep_get_and_clear_full() and page_remove_rmap(),
@@ -1723,7 +1723,6 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 	};
@@ -1733,6 +1732,7 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
 
+	pvmw_set_page(&pvmw, page);
 	/*
	 * When racing against e.g. zap_pte_range() on another cpu,
 	 * in between its ptep_get_and_clear_full() and page_remove_rmap(),
@@ -2003,11 +2003,11 @@ static bool page_mlock_one(struct page *page, struct vm_area_struct *vma,
 				 unsigned long address, void *unused)
 {
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 	};
 
+	pvmw_set_page(&pvmw, page);
 	/* An un-locked vma doesn't have any pages to lock, continue the scan */
 	if (!(vma->vm_flags & VM_LOCKED))
 		return true;
@@ -2078,7 +2078,6 @@ static bool page_make_device_exclusive_one(struct page *page,
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page_vma_mapped_walk pvmw = {
-		.page = page,
 		.vma = vma,
 		.address = address,
 	};
@@ -2090,6 +2089,7 @@ static bool page_make_device_exclusive_one(struct page *page,
 	swp_entry_t entry;
 	pte_t swp_pte;
 
+	pvmw_set_page(&pvmw, page);
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma,
 				      vma->vm_mm, address, min(vma->vm_end,
 				      address + page_size(page)), args->owner);
-- 
2.34.1