From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alistair Popple
Subject: [PATCH v8 1/8] mm: Remove special swap entry functions
Date: Wed, 7 Apr 2021 18:42:31 +1000
Message-ID: <20210407084238.20443-2-apopple@nvidia.com>
In-Reply-To: <20210407084238.20443-1-apopple@nvidia.com>
References: <20210407084238.20443-1-apopple@nvidia.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain

Remove multiple similar inline functions for dealing with different
types of special swap entries. Both migration and device private swap
entries use the swap offset to store a pfn. Instead of keeping multiple
inline functions to obtain a struct page for each swap entry type, use
a common function, pfn_swap_entry_to_page(). Also open-code the various
entry_to_pfn() functions, as this results in shorter code that is
easier to understand.
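As an illustrative sketch (not part of the diff below), a call site
that previously open-coded the branch:

	if (is_migration_entry(entry))
		page = migration_entry_to_page(entry);
	else if (is_device_private_entry(entry))
		page = device_private_entry_to_page(entry);

can be collapsed to:

	if (is_pfn_swap_entry(entry))
		page = pfn_swap_entry_to_page(entry);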
Signed-off-by: Alistair Popple
Reviewed-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
---

v7:
 * Reworded commit message to include pfn_swap_entry_to_page()
 * Added Christoph's Reviewed-by

v6:
 * Removed redundant compound_page() call from inside PageLocked()
 * Fixed a minor build issue for s390 reported by kernel test bot

v4:
 * Added pfn_swap_entry_to_page()
 * Reinstated check that migration entries point to locked pages
 * Removed #define swapcache_prepare which isn't needed for CONFIG_SWAP=0 builds
---
 arch/s390/mm/pgtable.c  |  2 +-
 fs/proc/task_mmu.c      | 23 +++++---------
 include/linux/swap.h    |  4 +--
 include/linux/swapops.h | 69 ++++++++++++++---------------------------
 mm/hmm.c                |  5 ++-
 mm/huge_memory.c        |  4 +--
 mm/memcontrol.c         |  2 +-
 mm/memory.c             | 10 +++---
 mm/migrate.c            |  6 ++--
 mm/page_vma_mapped.c    |  6 ++--
 10 files changed, 50 insertions(+), 81 deletions(-)

diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index 18205f851c24..eec3a9d7176e 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -691,7 +691,7 @@ static void ptep_zap_swap_entry(struct mm_struct *mm, swp_entry_t entry)
 	if (!non_swap_entry(entry))
 		dec_mm_counter(mm, MM_SWAPENTS);
 	else if (is_migration_entry(entry)) {
-		struct page *page = migration_entry_to_page(entry);
+		struct page *page = pfn_swap_entry_to_page(entry);
 
 		dec_mm_counter(mm, mm_counter(page));
 	}
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 3cec6fbef725..08ee59d945c0 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -514,10 +514,8 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr,
 			} else {
 				mss->swap_pss += (u64)PAGE_SIZE << PSS_SHIFT;
 			}
-		} else if (is_migration_entry(swpent))
-			page = migration_entry_to_page(swpent);
-		else if (is_device_private_entry(swpent))
-			page = device_private_entry_to_page(swpent);
+		} else if (is_pfn_swap_entry(swpent))
+			page = pfn_swap_entry_to_page(swpent);
 	} else if (unlikely(IS_ENABLED(CONFIG_SHMEM) && mss->check_shmem_swap
 							&& pte_none(*pte))) {
 		page = xa_load(&vma->vm_file->f_mapping->i_pages,
@@ -549,7 +547,7 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
 		swp_entry_t entry = pmd_to_swp_entry(*pmd);
 
 		if (is_migration_entry(entry))
-			page = migration_entry_to_page(entry);
+			page = pfn_swap_entry_to_page(entry);
 	}
 	if (IS_ERR_OR_NULL(page))
 		return;
@@ -691,10 +689,8 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
 	} else if (is_swap_pte(*pte)) {
 		swp_entry_t swpent = pte_to_swp_entry(*pte);
 
-		if (is_migration_entry(swpent))
-			page = migration_entry_to_page(swpent);
-		else if (is_device_private_entry(swpent))
-			page = device_private_entry_to_page(swpent);
+		if (is_pfn_swap_entry(swpent))
+			page = pfn_swap_entry_to_page(swpent);
 	}
 	if (page) {
 		int mapcount = page_mapcount(page);
@@ -1383,11 +1379,8 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
 		frame = swp_type(entry) |
 			(swp_offset(entry) << MAX_SWAPFILES_SHIFT);
 		flags |= PM_SWAP;
-		if (is_migration_entry(entry))
-			page = migration_entry_to_page(entry);
-
-		if (is_device_private_entry(entry))
-			page = device_private_entry_to_page(entry);
+		if (is_pfn_swap_entry(entry))
+			page = pfn_swap_entry_to_page(entry);
 	}
 
 	if (page && !PageAnon(page))
@@ -1444,7 +1437,7 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
 		if (pmd_swp_soft_dirty(pmd))
 			flags |= PM_SOFT_DIRTY;
 		VM_BUG_ON(!is_pmd_migration_entry(pmd));
-		page = migration_entry_to_page(entry);
+		page = pfn_swap_entry_to_page(entry);
 	}
 #endif
 
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 4cc6ec3bf0ab..516104b9334b 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -523,8 +523,8 @@ static inline void show_swap_cache_info(void)
 {
 }
 
-#define free_swap_and_cache(e) ({(is_migration_entry(e) || is_device_private_entry(e));})
-#define swapcache_prepare(e) ({(is_migration_entry(e) || is_device_private_entry(e));})
+/* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
+#define free_swap_and_cache(e) is_pfn_swap_entry(e)
 
 static inline int add_swap_count_continuation(swp_entry_t swp, gfp_t gfp_mask)
 {
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index d9b7c9132c2f..139be8235ad2 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -121,16 +121,6 @@ static inline bool is_write_device_private_entry(swp_entry_t entry)
 {
 	return unlikely(swp_type(entry) == SWP_DEVICE_WRITE);
 }
-
-static inline unsigned long device_private_entry_to_pfn(swp_entry_t entry)
-{
-	return swp_offset(entry);
-}
-
-static inline struct page *device_private_entry_to_page(swp_entry_t entry)
-{
-	return pfn_to_page(swp_offset(entry));
-}
 #else /* CONFIG_DEVICE_PRIVATE */
 static inline swp_entry_t make_device_private_entry(struct page *page, bool write)
 {
@@ -150,16 +140,6 @@ static inline bool is_write_device_private_entry(swp_entry_t entry)
 {
 	return false;
 }
-
-static inline unsigned long device_private_entry_to_pfn(swp_entry_t entry)
-{
-	return 0;
-}
-
-static inline struct page *device_private_entry_to_page(swp_entry_t entry)
-{
-	return NULL;
-}
 #endif /* CONFIG_DEVICE_PRIVATE */
 
 #ifdef CONFIG_MIGRATION
@@ -182,22 +162,6 @@ static inline int is_write_migration_entry(swp_entry_t entry)
 	return unlikely(swp_type(entry) == SWP_MIGRATION_WRITE);
 }
 
-static inline unsigned long migration_entry_to_pfn(swp_entry_t entry)
-{
-	return swp_offset(entry);
-}
-
-static inline struct page *migration_entry_to_page(swp_entry_t entry)
-{
-	struct page *p = pfn_to_page(swp_offset(entry));
-	/*
-	 * Any use of migration entries may only occur while the
-	 * corresponding page is locked
-	 */
-	BUG_ON(!PageLocked(compound_head(p)));
-	return p;
-}
-
 static inline void make_migration_entry_read(swp_entry_t *entry)
 {
 	*entry = swp_entry(SWP_MIGRATION_READ, swp_offset(*entry));
@@ -217,16 +181,6 @@ static inline int is_migration_entry(swp_entry_t swp)
 	return 0;
 }
 
-static inline unsigned long migration_entry_to_pfn(swp_entry_t entry)
-{
-	return 0;
-}
-
-static inline struct page *migration_entry_to_page(swp_entry_t entry)
-{
-	return NULL;
-}
-
 static inline void make_migration_entry_read(swp_entry_t *entryp) { }
 static inline void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 					spinlock_t *ptl) { }
@@ -241,6 +195,29 @@ static inline int is_write_migration_entry(swp_entry_t entry)
 
 #endif
 
+static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
+{
+	struct page *p = pfn_to_page(swp_offset(entry));
+
+	/*
+	 * Any use of migration entries may only occur while the
+	 * corresponding page is locked
+	 */
+	BUG_ON(is_migration_entry(entry) && !PageLocked(p));
+
+	return p;
+}
+
+/*
+ * A pfn swap entry is a special type of swap entry that always has a pfn stored
+ * in the swap offset. They are used to represent unaddressable device memory
+ * and to restrict access to a page undergoing migration.
+ */
+static inline bool is_pfn_swap_entry(swp_entry_t entry)
+{
+	return is_migration_entry(entry) || is_device_private_entry(entry);
+}
+
 struct page_vma_mapped_walk;
 
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
diff --git a/mm/hmm.c b/mm/hmm.c
index 943cb2ba4442..3b2dda71d0ed 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -214,7 +214,7 @@ static inline bool hmm_is_device_private_entry(struct hmm_range *range,
 		swp_entry_t entry)
 {
 	return is_device_private_entry(entry) &&
-		device_private_entry_to_page(entry)->pgmap->owner ==
+		pfn_swap_entry_to_page(entry)->pgmap->owner ==
 		range->dev_private_owner;
 }
 
@@ -257,8 +257,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			cpu_flags = HMM_PFN_VALID;
 			if (is_write_device_private_entry(entry))
 				cpu_flags |= HMM_PFN_WRITE;
-			*hmm_pfn = device_private_entry_to_pfn(entry) |
-					cpu_flags;
+			*hmm_pfn = swp_offset(entry) | cpu_flags;
 			return 0;
 		}
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 395c75111d33..a4cda8564bcf 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1700,7 +1700,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 		VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
 		entry = pmd_to_swp_entry(orig_pmd);
-		page = pfn_to_page(swp_offset(entry));
+		page = pfn_swap_entry_to_page(entry);
 		flush_needed = 0;
 	} else
 		WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
@@ -2108,7 +2108,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		swp_entry_t entry;
 
 		entry = pmd_to_swp_entry(old_pmd);
-		page = pfn_to_page(swp_offset(entry));
+		page = pfn_swap_entry_to_page(entry);
 		write = is_write_migration_entry(entry);
 		young = false;
 		soft_dirty = pmd_swp_soft_dirty(old_pmd);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 845eec01ef9d..043840dbe48a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5523,7 +5523,7 @@ static struct page *mc_handle_swap_pte(struct vm_area_struct *vma,
 	 * as special swap entry in the CPU page table.
 	 */
 	if (is_device_private_entry(ent)) {
-		page = device_private_entry_to_page(ent);
+		page = pfn_swap_entry_to_page(ent);
 		/*
 		 * MEMORY_DEVICE_PRIVATE means ZONE_DEVICE page and which have
 		 * a refcount of 1 when free (unlike normal page)
diff --git a/mm/memory.c b/mm/memory.c
index c8e357627318..1c98e3c1c2de 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -730,7 +730,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 			}
 			rss[MM_SWAPENTS]++;
 		} else if (is_migration_entry(entry)) {
-			page = migration_entry_to_page(entry);
+			page = pfn_swap_entry_to_page(entry);
 
 			rss[mm_counter(page)]++;
 
@@ -749,7 +749,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 				set_pte_at(src_mm, addr, src_pte, pte);
 			}
 		} else if (is_device_private_entry(entry)) {
-			page = device_private_entry_to_page(entry);
+			page = pfn_swap_entry_to_page(entry);
 
 			/*
 			 * Update rss count even for unaddressable pages, as
@@ -1286,7 +1286,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 
 		entry = pte_to_swp_entry(ptent);
 		if (is_device_private_entry(entry)) {
-			struct page *page = device_private_entry_to_page(entry);
+			struct page *page = pfn_swap_entry_to_page(entry);
 
 			if (unlikely(details && details->check_mapping)) {
 				/*
@@ -1315,7 +1315,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		else if (is_migration_entry(entry)) {
 			struct page *page;
 
-			page = migration_entry_to_page(entry);
+			page = pfn_swap_entry_to_page(entry);
 			rss[mm_counter(page)]--;
 		}
 		if (unlikely(!free_swap_and_cache(entry)))
@@ -3282,7 +3282,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			migration_entry_wait(vma->vm_mm, vmf->pmd,
 					     vmf->address);
 		} else if (is_device_private_entry(entry)) {
-			vmf->page = device_private_entry_to_page(entry);
+			vmf->page = pfn_swap_entry_to_page(entry);
 			ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
 		} else if (is_hwpoison_entry(entry)) {
 			ret = VM_FAULT_HWPOISON;
diff --git a/mm/migrate.c b/mm/migrate.c
index 62b81d5257aa..600978d18750 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -321,7 +321,7 @@ void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 	if (!is_migration_entry(entry))
 		goto out;
 
-	page = migration_entry_to_page(entry);
+	page = pfn_swap_entry_to_page(entry);
 
 	/*
 	 * Once page cache replacement of page migration started, page_count
@@ -361,7 +361,7 @@ void pmd_migration_entry_wait(struct mm_struct *mm, pmd_t *pmd)
 	ptl = pmd_lock(mm, pmd);
 	if (!is_pmd_migration_entry(*pmd))
 		goto unlock;
-	page = migration_entry_to_page(pmd_to_swp_entry(*pmd));
+	page = pfn_swap_entry_to_page(pmd_to_swp_entry(*pmd));
 	if (!get_page_unless_zero(page))
 		goto unlock;
 	spin_unlock(ptl);
@@ -2443,7 +2443,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			if (!is_device_private_entry(entry))
 				goto next;
 
-			page = device_private_entry_to_page(entry);
+			page = pfn_swap_entry_to_page(entry);
 			if (!(migrate->flags &
 			      MIGRATE_VMA_SELECT_DEVICE_PRIVATE) ||
 			    page->pgmap->owner != migrate->pgmap_owner)
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 86e3a3688d59..eed988ab2e81 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -96,7 +96,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 		if (!is_migration_entry(entry))
 			return false;
 
-		pfn = migration_entry_to_pfn(entry);
+		pfn = swp_offset(entry);
 	} else if (is_swap_pte(*pvmw->pte)) {
 		swp_entry_t entry;
 
@@ -105,7 +105,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 		if (!is_device_private_entry(entry))
 			return false;
 
-		pfn = device_private_entry_to_pfn(entry);
+		pfn = swp_offset(entry);
 	} else {
 		if (!pte_present(*pvmw->pte))
 			return false;
@@ -200,7 +200,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		if (is_migration_entry(pmd_to_swp_entry(*pvmw->pmd))) {
 			swp_entry_t entry = pmd_to_swp_entry(*pvmw->pmd);
 
-			if (migration_entry_to_page(entry) != page)
+			if (pfn_swap_entry_to_page(entry) != page)
 				return not_found(pvmw);
 			return true;
 		}
-- 
2.20.1