From mboxrd@z Thu Jan 1 00:00:00 1970
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrea Arcangeli, "Kirill A . Shutemov", Axel Rasmussen, Nadav Amit,
    Hugh Dickins, Jerome Glisse, Jason Gunthorpe, peterx@redhat.com,
    Andrew Morton, Miaohe Lin, Mike Rapoport, Matthew Wilcox, Mike Kravetz
Subject: [PATCH v3 05/27] mm/swap: Introduce the idea of special swap ptes
Date: Thu, 27 May 2021 16:21:17 -0400
Message-Id: <20210527202117.30689-1-peterx@redhat.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210527201927.29586-1-peterx@redhat.com>
References: <20210527201927.29586-1-peterx@redhat.com>

We used to have special swap entries, like migration entries, hw-poison
entries, device private entries, etc.

Those "special swap entries" reside in the swap entry space: they must be
valid swap entries first, and their specific types are then decided by
swp_type(entry).

This patch introduces another idea called "special swap ptes".  They are
easy to confuse with "special swap entries", but a special swap pte should
never contain a swap entry at all.  That means it is illegal to call
pte_to_swp_entry() upon a special swap pte.

Make the uffd-wp special pte the first special swap pte.

Before this patch, is_swap_pte()==true means one of the below:

  (a.1) The pte has a normal swap entry (non_swap_entry()==false).  For
        example, when an anonymous page got swapped out.

  (a.2) The pte has a special swap entry (non_swap_entry()==true).  For
        example, a migration entry, a hw-poison entry, etc.

After this patch, is_swap_pte()==true means one of the below, where case
(b) is added:

 (a) The pte contains a swap entry.

   (a.1) The pte has a normal swap entry (non_swap_entry()==false).  For
         example, when an anonymous page got swapped out.

   (a.2) The pte has a special swap entry (non_swap_entry()==true).  For
         example, a migration entry, a hw-poison entry, etc.

 (b) The pte does not contain a swap entry at all (so it cannot be passed
     into pte_to_swp_entry()).  For example, the uffd-wp special swap pte.

Teach the whole mm core about this new idea.  It's done by introducing
another helper called pte_has_swap_entry(), which covers cases (a.1) and
(a.2).  Before this patch it behaves exactly like is_swap_pte(), because
there's no special swap pte yet.  Most previous users of is_swap_pte() in
mm core now need to use the new helper pte_has_swap_entry() instead, to
make sure we won't try to parse a swap entry from a swap special pte
(which does not contain a swap entry at all!).  We either handle the swap
special pte explicitly, or it'll naturally fall into the default "else"
paths.

Warn properly (e.g., in do_swap_page()) when we see a special swap pte:
we should never call do_swap_page() upon those ptes, so just bail out
early if it happens.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/arm64/kernel/mte.c |  2 +-
 fs/proc/task_mmu.c      | 14 ++++++++------
 include/linux/swapops.h | 39 ++++++++++++++++++++++++++++++++++++++-
 mm/gup.c                |  2 +-
 mm/hmm.c                |  2 +-
 mm/khugepaged.c         | 11 ++++++++++-
 mm/madvise.c            |  4 ++--
 mm/memcontrol.c         |  2 +-
 mm/memory.c             |  7 +++++++
 mm/migrate.c            |  4 ++--
 mm/mincore.c            |  2 +-
 mm/mprotect.c           |  2 +-
 mm/mremap.c             |  2 +-
 mm/page_vma_mapped.c    |  6 +++---
 mm/swapfile.c           |  2 +-
 15 files changed, 78 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 125a10e413e9..a6fd3fb3eacb 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -36,7 +36,7 @@ static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
 {
 	pte_t old_pte = READ_ONCE(*ptep);
 
-	if (check_swap && is_swap_pte(old_pte)) {
+	if (check_swap && pte_has_swap_entry(old_pte)) {
 		swp_entry_t entry = pte_to_swp_entry(old_pte);
 
 		if (!non_swap_entry(entry) && mte_restore_tags(entry, page))
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index eb97468dfe4c..9c5af77b5290 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -498,7 +498,7 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
 
 	if (pte_present(*pte)) {
 		page = vm_normal_page(vma, addr, *pte);
-	} else if (is_swap_pte(*pte)) {
+	} else if (pte_has_swap_entry(*pte)) {
 		swp_entry_t swpent = pte_to_swp_entry(*pte);
 
 		if (!non_swap_entry(swpent)) {
@@ -516,8 +516,10 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
 			}
 		} else if (is_pfn_swap_entry(swpent))
 			page = pfn_swap_entry_to_page(swpent);
-	} else if (unlikely(IS_ENABLED(CONFIG_SHMEM) && mss->check_shmem_swap
-							&& pte_none(*pte))) {
+	} else if (unlikely(IS_ENABLED(CONFIG_SHMEM) &&
+			    mss->check_shmem_swap &&
+			    /* Here swap special pte is the same as none pte */
+			    (pte_none(*pte) || is_swap_special_pte(*pte)))) {
 		page = xa_load(&vma->vm_file->f_mapping->i_pages,
 			linear_page_index(vma, addr));
 		if (xa_is_value(page))
@@ -689,7 +691,7 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
 
 	if (pte_present(*pte)) {
 		page = vm_normal_page(vma, addr, *pte);
-	} else if (is_swap_pte(*pte)) {
+	} else if (pte_has_swap_entry(*pte)) {
 		swp_entry_t swpent = pte_to_swp_entry(*pte);
 
 		if (is_pfn_swap_entry(swpent))
@@ -1071,7 +1073,7 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma,
 		ptent = pte_wrprotect(old_pte);
 		ptent = pte_clear_soft_dirty(ptent);
 		ptep_modify_prot_commit(vma, addr, pte, old_pte, ptent);
-	} else if (is_swap_pte(ptent)) {
+	} else if (pte_has_swap_entry(ptent)) {
 		ptent = pte_swp_clear_soft_dirty(ptent);
 		set_pte_at(vma->vm_mm, addr, pte, ptent);
 	}
@@ -1374,7 +1376,7 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
 			flags |= PM_SOFT_DIRTY;
 		if (pte_uffd_wp(pte))
 			flags |= PM_UFFD_WP;
-	} else if (is_swap_pte(pte)) {
+	} else if (pte_has_swap_entry(pte)) {
 		swp_entry_t entry;
 		if (pte_swp_soft_dirty(pte))
 			flags |= PM_SOFT_DIRTY;
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index af3d2661e41e..4a316c015fe0 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -5,6 +5,7 @@
 #include <linux/radix-tree.h>
 #include <linux/bug.h>
 #include <linux/mm_types.h>
+#include <linux/userfaultfd_k.h>
 
 #ifdef CONFIG_MMU
 
@@ -52,12 +53,48 @@ static inline pgoff_t swp_offset(swp_entry_t entry)
 	return entry.val & SWP_OFFSET_MASK;
 }
 
-/* check whether a pte points to a swap entry */
+/*
+ * is_swap_pte() returns true for three cases:
+ *
+ * (a) The pte contains a swap entry.
+ *
+ *   (a.1) The pte has a normal swap entry (non_swap_entry()==false).  For
+ *         example, when an anonymous page got swapped out.
+ *
+ *   (a.2) The pte has a special swap entry (non_swap_entry()==true).  For
+ *         example, a migration entry, a hw-poison entry, etc.
+ *
+ * (b) The pte does not contain a swap entry at all (so it cannot be passed
+ *     into pte_to_swp_entry()).  For example, uffd-wp special swap pte.
+ */
 static inline int is_swap_pte(pte_t pte)
 {
 	return !pte_none(pte) && !pte_present(pte);
 }
 
+/*
+ * A swap-like special pte should only be used as a special marker to trigger
+ * a page fault.  We should treat it similarly to pte_none() in most cases,
+ * except that it may contain some special information that can persist within
+ * the pte.  Currently the only special swap pte is UFFD_WP_SWP_PTE_SPECIAL.
+ *
+ * Note: we should never call pte_to_swp_entry() upon a special swap pte,
+ * because a swap special pte does not contain a swap entry!
+ */
+static inline bool is_swap_special_pte(pte_t pte)
+{
+	return pte_swp_uffd_wp_special(pte);
+}
+
+/*
+ * Returns true if the pte contains a swap entry.  This includes not only the
+ * normal swp entry case but also migration entries, etc.
+ */
+static inline bool pte_has_swap_entry(pte_t pte)
+{
+	return is_swap_pte(pte) && !is_swap_special_pte(pte);
+}
+
 /*
  * Convert the arch-dependent pte representation of a swp_entry_t into an
  * arch-independent swp_entry_t.
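
To illustrate how the helpers above compose, here is a minimal usage
sketch (illustration only, not part of the patch; classify_pte() is a
hypothetical function) showing how a page table walker can tell the
cases apart after this change:

	/* Hypothetical example: classify a pte using the new helpers. */
	static const char *classify_pte(pte_t pte)
	{
		swp_entry_t entry;

		if (pte_none(pte))
			return "none";		/* nothing installed */
		if (pte_present(pte))
			return "present";	/* a mapped page */
		/* From here on, is_swap_pte() is true. */
		if (is_swap_special_pte(pte))
			/* Case (b): no swap entry inside, e.g. uffd-wp special pte */
			return "special swap pte";
		/* pte_has_swap_entry() is true: pte_to_swp_entry() is legal. */
		entry = pte_to_swp_entry(pte);
		if (non_swap_entry(entry))
			/* Case (a.2): e.g. a migration or hw-poison entry */
			return "special swap entry";
		/* Case (a.1): e.g. a swapped-out anonymous page */
		return "normal swap entry";
	}

Callers that only want real swap entries (e.g., unuse_pte_range() below)
switch to pte_has_swap_entry(), while callers that treat the pte like an
empty pte (e.g., mincore_pte_range()) add an is_swap_special_pte() check
next to pte_none().
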
diff --git a/mm/gup.c b/mm/gup.c
index 29a0c7d87024..e03590c9c68e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -485,7 +485,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 		 */
 		if (likely(!(flags & FOLL_MIGRATION)))
 			goto no_page;
-		if (pte_none(pte))
+		if (!pte_has_swap_entry(pte))
 			goto no_page;
 		entry = pte_to_swp_entry(pte);
 		if (!is_migration_entry(entry))
diff --git a/mm/hmm.c b/mm/hmm.c
index fad6be2bf072..aba1bf2c6742 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -239,7 +239,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	pte_t pte = *ptep;
 	uint64_t pfn_req_flags = *hmm_pfn;
 
-	if (pte_none(pte)) {
+	if (pte_none(pte) || is_swap_special_pte(pte)) {
 		required_fault =
 			hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0);
 		if (required_fault)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b0412be08fa2..7376a9b5bfc9 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1018,7 +1018,7 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
 
 		vmf.pte = pte_offset_map(pmd, address);
 		vmf.orig_pte = *vmf.pte;
-		if (!is_swap_pte(vmf.orig_pte)) {
+		if (!pte_has_swap_entry(vmf.orig_pte)) {
 			pte_unmap(vmf.pte);
 			continue;
 		}
@@ -1245,6 +1245,15 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 	     _pte++, _address += PAGE_SIZE) {
 		pte_t pteval = *_pte;
 		if (is_swap_pte(pteval)) {
+			if (is_swap_special_pte(pteval)) {
+				/*
+				 * Reuse SCAN_PTE_UFFD_WP.  If there will be
+				 * new users of is_swap_special_pte(), we'd
+				 * better introduce a new result type.
+				 */
+				result = SCAN_PTE_UFFD_WP;
+				goto out_unmap;
+			}
 			if (++unmapped <= khugepaged_max_ptes_swap) {
 				/*
 				 * Always be strict with uffd-wp
diff --git a/mm/madvise.c b/mm/madvise.c
index 012129fbfaf8..ebde36d685ad 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -204,7 +204,7 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
 		pte = *(orig_pte + ((index - start) / PAGE_SIZE));
 		pte_unmap_unlock(orig_pte, ptl);
 
-		if (pte_present(pte) || pte_none(pte))
+		if (!pte_has_swap_entry(pte))
 			continue;
 		entry = pte_to_swp_entry(pte);
 		if (unlikely(non_swap_entry(entry)))
@@ -596,7 +596,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 	for (; addr != end; pte++, addr += PAGE_SIZE) {
 		ptent = *pte;
 
-		if (pte_none(ptent))
+		if (pte_none(ptent) || is_swap_special_pte(ptent))
 			continue;
 		/*
 		 * If the pte has swp_entry, just clear page table to
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index cb864f87b01d..f684f6cf6fce 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5719,7 +5719,7 @@ static enum mc_target_type get_mctgt_type(struct vm_area_struct *vma,
 
 	if (pte_present(ptent))
 		page = mc_handle_present_pte(vma, addr, ptent);
-	else if (is_swap_pte(ptent))
+	else if (pte_has_swap_entry(ptent))
 		page = mc_handle_swap_pte(vma, ptent, &ent);
 	else if (pte_none(ptent))
 		page = mc_handle_file_pte(vma, addr, ptent, &ent);
diff --git a/mm/memory.c b/mm/memory.c
index 0ccaae2647c0..2b24af4616df 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3445,6 +3445,13 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!pte_unmap_same(vmf))
 		goto out;
 
+	/*
+	 * We should never call do_swap_page upon a swap special pte; just be
+	 * safe to bail out if it happens.
+	 */
+	if (WARN_ON_ONCE(is_swap_special_pte(vmf->orig_pte)))
+		goto out;
+
 	entry = pte_to_swp_entry(vmf->orig_pte);
 	if (unlikely(non_swap_entry(entry))) {
 		if (is_migration_entry(entry)) {
diff --git a/mm/migrate.c b/mm/migrate.c
index 91ee6f0941b4..2468c5d00f30 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -294,7 +294,7 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 
 	spin_lock(ptl);
 	pte = *ptep;
-	if (!is_swap_pte(pte))
+	if (!pte_has_swap_entry(pte))
 		goto out;
 
 	entry = pte_to_swp_entry(pte);
@@ -2248,7 +2248,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 
 		pte = *ptep;
 
-		if (pte_none(pte)) {
+		if (pte_none(pte) || is_swap_special_pte(pte)) {
 			if (vma_is_anonymous(vma)) {
 				mpfn = MIGRATE_PFN_MIGRATE;
 				migrate->cpages++;
diff --git a/mm/mincore.c b/mm/mincore.c
index 9122676b54d6..5728c3e6473f 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -121,7 +121,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	for (; addr != end; ptep++, addr += PAGE_SIZE) {
 		pte_t pte = *ptep;
 
-		if (pte_none(pte))
+		if (pte_none(pte) || is_swap_special_pte(pte))
 			__mincore_unmapped_range(addr, addr + PAGE_SIZE,
 						 vma, vec);
 		else if (pte_present(pte))
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 883e2cc85cad..4b743394afbe 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -139,7 +139,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			}
 			ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
 			pages++;
-		} else if (is_swap_pte(oldpte)) {
+		} else if (pte_has_swap_entry(oldpte)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
 			pte_t newpte;
 
diff --git a/mm/mremap.c b/mm/mremap.c
index b7523589f218..64cd6581e05a 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -125,7 +125,7 @@ static pte_t move_soft_dirty_pte(pte_t pte)
 #ifdef CONFIG_MEM_SOFT_DIRTY
 	if (pte_present(pte))
 		pte = pte_mksoft_dirty(pte);
-	else if (is_swap_pte(pte))
+	else if (pte_has_swap_entry(pte))
 		pte = pte_swp_mksoft_dirty(pte);
 #endif
 	return pte;
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index f535bcb4950c..c2f9bcee2273 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -36,7 +36,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
 		 * For more details on device private memory see HMM
 		 * (include/linux/hmm.h or mm/hmm.c).
 		 */
-		if (is_swap_pte(*pvmw->pte)) {
+		if (pte_has_swap_entry(*pvmw->pte)) {
 			swp_entry_t entry;
 
 			/* Handle un-addressable ZONE_DEVICE memory */
@@ -90,7 +90,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 
 	if (pvmw->flags & PVMW_MIGRATION) {
 		swp_entry_t entry;
-		if (!is_swap_pte(*pvmw->pte))
+		if (!pte_has_swap_entry(*pvmw->pte))
 			return false;
 		entry = pte_to_swp_entry(*pvmw->pte);
 
@@ -99,7 +99,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 			return false;
 
 		pfn = swp_offset(entry);
-	} else if (is_swap_pte(*pvmw->pte)) {
+	} else if (pte_has_swap_entry(*pvmw->pte)) {
 		swp_entry_t entry;
 
 		/* Handle un-addressable ZONE_DEVICE memory */
diff --git a/mm/swapfile.c b/mm/swapfile.c
index cbb4c0795284..2401b2a90443 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1972,7 +1972,7 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 	si = swap_info[type];
 	pte = pte_offset_map(pmd, addr);
 	do {
-		if (!is_swap_pte(*pte))
+		if (!pte_has_swap_entry(*pte))
 			continue;
 
 		entry = pte_to_swp_entry(*pte);
-- 
2.31.1