From: Alistair Popple <apopple@nvidia.com>
Cc: Alistair Popple, Christoph Hellwig
Subject: [PATCH v10 02/10] mm/swapops: Rework swap entry manipulation code
Date: Mon, 7 Jun 2021 17:58:47 +1000
Message-ID: <20210607075855.5084-3-apopple@nvidia.com>
In-Reply-To: <20210607075855.5084-1-apopple@nvidia.com>
References: <20210607075855.5084-1-apopple@nvidia.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain

Both migration and device private pages use special swap entries that are
manipulated by a range of inline functions. The arguments to these are
somewhat inconsistent, so rework them to remove the flag-type arguments
and to make the arguments similar for both read and write entry creation.

Signed-off-by: Alistair Popple <apopple@nvidia.com>
Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
---
 include/linux/swapops.h | 56 ++++++++++++++++++++++-------------------
 mm/debug_vm_pgtable.c   | 12 ++++-----
 mm/hmm.c                |  2 +-
 mm/huge_memory.c        | 26 +++++++++++++------
 mm/hugetlb.c            | 10 +++++---
 mm/memory.c             | 10 +++++---
 mm/migrate.c            | 26 ++++++++++++++-----
 mm/mprotect.c           | 10 +++++---
 mm/rmap.c               | 10 +++++---
 9 files changed, 100 insertions(+), 62 deletions(-)

diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 139be8235ad2..4dfd807ae52a 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -100,35 +100,35 @@ static inline void *swp_to_radix_entry(swp_entry_t entry)
 }
 
 #if IS_ENABLED(CONFIG_DEVICE_PRIVATE)
-static inline swp_entry_t make_device_private_entry(struct page *page, bool write)
+static inline swp_entry_t make_readable_device_private_entry(pgoff_t offset)
 {
-	return swp_entry(write ? SWP_DEVICE_WRITE : SWP_DEVICE_READ,
-			 page_to_pfn(page));
+	return swp_entry(SWP_DEVICE_READ, offset);
 }
 
-static inline bool is_device_private_entry(swp_entry_t entry)
+static inline swp_entry_t make_writable_device_private_entry(pgoff_t offset)
 {
-	int type = swp_type(entry);
-	return type == SWP_DEVICE_READ || type == SWP_DEVICE_WRITE;
+	return swp_entry(SWP_DEVICE_WRITE, offset);
 }
 
-static inline void make_device_private_entry_read(swp_entry_t *entry)
+static inline bool is_device_private_entry(swp_entry_t entry)
 {
-	*entry = swp_entry(SWP_DEVICE_READ, swp_offset(*entry));
+	int type = swp_type(entry);
+	return type == SWP_DEVICE_READ || type == SWP_DEVICE_WRITE;
 }
 
-static inline bool is_write_device_private_entry(swp_entry_t entry)
+static inline bool is_writable_device_private_entry(swp_entry_t entry)
 {
 	return unlikely(swp_type(entry) == SWP_DEVICE_WRITE);
 }
 #else /* CONFIG_DEVICE_PRIVATE */
-static inline swp_entry_t make_device_private_entry(struct page *page, bool write)
+static inline swp_entry_t make_readable_device_private_entry(pgoff_t offset)
 {
 	return swp_entry(0, 0);
 }
 
-static inline void make_device_private_entry_read(swp_entry_t *entry)
+static inline swp_entry_t make_writable_device_private_entry(pgoff_t offset)
 {
+	return swp_entry(0, 0);
 }
 
 static inline bool is_device_private_entry(swp_entry_t entry)
@@ -136,35 +136,32 @@ static inline bool is_device_private_entry(swp_entry_t entry)
 	return false;
 }
 
-static inline bool is_write_device_private_entry(swp_entry_t entry)
+static inline bool is_writable_device_private_entry(swp_entry_t entry)
 {
 	return false;
 }
 #endif /* CONFIG_DEVICE_PRIVATE */
 
 #ifdef CONFIG_MIGRATION
-static inline swp_entry_t make_migration_entry(struct page *page, int write)
-{
-	BUG_ON(!PageLocked(compound_head(page)));
-
-	return swp_entry(write ? SWP_MIGRATION_WRITE : SWP_MIGRATION_READ,
-			page_to_pfn(page));
-}
-
 static inline int is_migration_entry(swp_entry_t entry)
 {
 	return unlikely(swp_type(entry) == SWP_MIGRATION_READ ||
 			swp_type(entry) == SWP_MIGRATION_WRITE);
 }
 
-static inline int is_write_migration_entry(swp_entry_t entry)
+static inline int is_writable_migration_entry(swp_entry_t entry)
 {
 	return unlikely(swp_type(entry) == SWP_MIGRATION_WRITE);
 }
 
-static inline void make_migration_entry_read(swp_entry_t *entry)
+static inline swp_entry_t make_readable_migration_entry(pgoff_t offset)
 {
-	*entry = swp_entry(SWP_MIGRATION_READ, swp_offset(*entry));
+	return swp_entry(SWP_MIGRATION_READ, offset);
+}
+
+static inline swp_entry_t make_writable_migration_entry(pgoff_t offset)
+{
+	return swp_entry(SWP_MIGRATION_WRITE, offset);
 }
 
 extern void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
@@ -174,21 +171,28 @@ extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
 extern void migration_entry_wait_huge(struct vm_area_struct *vma,
 		struct mm_struct *mm, pte_t *pte);
 #else
+static inline swp_entry_t make_readable_migration_entry(pgoff_t offset)
+{
+	return swp_entry(0, 0);
+}
+
+static inline swp_entry_t make_writable_migration_entry(pgoff_t offset)
+{
+	return swp_entry(0, 0);
+}
 
-#define make_migration_entry(page, write) swp_entry(0, 0)
 static inline int is_migration_entry(swp_entry_t swp)
 {
 	return 0;
 }
 
-static inline void make_migration_entry_read(swp_entry_t *entryp) { }
 static inline void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 					spinlock_t *ptl) { }
 static inline void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
 					unsigned long address) { }
 static inline void migration_entry_wait_huge(struct vm_area_struct *vma,
 		struct mm_struct *mm, pte_t *pte) { }
-static inline int is_write_migration_entry(swp_entry_t entry)
+static inline int is_writable_migration_entry(swp_entry_t entry)
 {
 	return 0;
 }
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 05efe98a9ac2..1dcc441da377 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -817,17 +817,17 @@ static void __init swap_migration_tests(void)
 	 * locked, otherwise it stumbles upon a BUG_ON().
 	 */
 	__SetPageLocked(page);
-	swp = make_migration_entry(page, 1);
+	swp = make_writable_migration_entry(page_to_pfn(page));
 	WARN_ON(!is_migration_entry(swp));
-	WARN_ON(!is_write_migration_entry(swp));
+	WARN_ON(!is_writable_migration_entry(swp));
 
-	make_migration_entry_read(&swp);
+	swp = make_readable_migration_entry(swp_offset(swp));
 	WARN_ON(!is_migration_entry(swp));
-	WARN_ON(is_write_migration_entry(swp));
+	WARN_ON(is_writable_migration_entry(swp));
 
-	swp = make_migration_entry(page, 0);
+	swp = make_readable_migration_entry(page_to_pfn(page));
 	WARN_ON(!is_migration_entry(swp));
-	WARN_ON(is_write_migration_entry(swp));
+	WARN_ON(is_writable_migration_entry(swp));
 	__ClearPageLocked(page);
 	__free_page(page);
 }
diff --git a/mm/hmm.c b/mm/hmm.c
index 3b2dda71d0ed..11df3ca30b82 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -255,7 +255,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		 */
 		if (hmm_is_device_private_entry(range, entry)) {
 			cpu_flags = HMM_PFN_VALID;
-			if (is_write_device_private_entry(entry))
+			if (is_writable_device_private_entry(entry))
 				cpu_flags |= HMM_PFN_WRITE;
 			*hmm_pfn = swp_offset(entry) | cpu_flags;
 			return 0;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7137ab31766a..2ec6dab72217 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1050,8 +1050,9 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		swp_entry_t entry = pmd_to_swp_entry(pmd);
 
 		VM_BUG_ON(!is_pmd_migration_entry(pmd));
-		if (is_write_migration_entry(entry)) {
-			make_migration_entry_read(&entry);
+		if (is_writable_migration_entry(entry)) {
+			entry = make_readable_migration_entry(
+							swp_offset(entry));
 			pmd = swp_entry_to_pmd(entry);
 			if (pmd_swp_soft_dirty(*src_pmd))
 				pmd = pmd_swp_mksoft_dirty(pmd);
@@ -1819,13 +1820,14 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 		swp_entry_t entry = pmd_to_swp_entry(*pmd);
 
 		VM_BUG_ON(!is_pmd_migration_entry(*pmd));
-		if (is_write_migration_entry(entry)) {
+		if (is_writable_migration_entry(entry)) {
 			pmd_t newpmd;
 			/*
 			 * A protection check is difficult so
 			 * just be safe and disable write
 			 */
-			make_migration_entry_read(&entry);
+			entry = make_readable_migration_entry(
+							swp_offset(entry));
 			newpmd = swp_entry_to_pmd(entry);
 			if (pmd_swp_soft_dirty(*pmd))
 				newpmd = pmd_swp_mksoft_dirty(newpmd);
@@ -2103,7 +2105,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 
 		entry = pmd_to_swp_entry(old_pmd);
 		page = pfn_swap_entry_to_page(entry);
-		write = is_write_migration_entry(entry);
+		write = is_writable_migration_entry(entry);
 		young = false;
 		soft_dirty = pmd_swp_soft_dirty(old_pmd);
 		uffd_wp = pmd_swp_uffd_wp(old_pmd);
@@ -2135,7 +2137,12 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		 */
 		if (freeze || pmd_migration) {
 			swp_entry_t swp_entry;
-			swp_entry = make_migration_entry(page + i, write);
+			if (write)
+				swp_entry = make_writable_migration_entry(
+							page_to_pfn(page + i));
+			else
+				swp_entry = make_readable_migration_entry(
+							page_to_pfn(page + i));
 			entry = swp_entry_to_pte(swp_entry);
 			if (soft_dirty)
 				entry = pte_swp_mksoft_dirty(entry);
@@ -3212,7 +3219,10 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
 	pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
 	if (pmd_dirty(pmdval))
 		set_page_dirty(page);
-	entry = make_migration_entry(page, pmd_write(pmdval));
+	if (pmd_write(pmdval))
+		entry = make_writable_migration_entry(page_to_pfn(page));
+	else
+		entry = make_readable_migration_entry(page_to_pfn(page));
 	pmdswp = swp_entry_to_pmd(entry);
 	if (pmd_soft_dirty(pmdval))
 		pmdswp = pmd_swp_mksoft_dirty(pmdswp);
@@ -3238,7 +3248,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 	pmde = pmd_mkold(mk_huge_pmd(new, vma->vm_page_prot));
 	if (pmd_swp_soft_dirty(*pvmw->pmd))
 		pmde = pmd_mksoft_dirty(pmde);
-	if (is_write_migration_entry(entry))
+	if (is_writable_migration_entry(entry))
 		pmde = maybe_pmd_mkwrite(pmde, vma);
 
 	flush_cache_range(vma, mmun_start, mmun_start + HPAGE_PMD_SIZE);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 95918f410c0f..5e6ee9c286c0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3989,12 +3989,13 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				    is_hugetlb_entry_hwpoisoned(entry))) {
 			swp_entry_t swp_entry = pte_to_swp_entry(entry);
 
-			if (is_write_migration_entry(swp_entry) && cow) {
+			if (is_writable_migration_entry(swp_entry) && cow) {
 				/*
 				 * COW mappings require pages in both
 				 * parent and child to be set to read.
 				 */
-				make_migration_entry_read(&swp_entry);
+				swp_entry = make_readable_migration_entry(
+							swp_offset(swp_entry));
 				entry = swp_entry_to_pte(swp_entry);
 				set_huge_swap_pte_at(src, addr, src_pte,
 						     entry, sz);
@@ -5237,10 +5238,11 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 		if (unlikely(is_hugetlb_entry_migration(pte))) {
 			swp_entry_t entry = pte_to_swp_entry(pte);
 
-			if (is_write_migration_entry(entry)) {
+			if (is_writable_migration_entry(entry)) {
 				pte_t newpte;
 
-				make_migration_entry_read(&entry);
+				entry = make_readable_migration_entry(
+							swp_offset(entry));
 				newpte = swp_entry_to_pte(entry);
 				set_huge_swap_pte_at(mm, address, ptep,
 						     newpte, huge_page_size(h));
diff --git a/mm/memory.c b/mm/memory.c
index 1f5c3f6134fb..2fb455c365c2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -734,13 +734,14 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 
 		rss[mm_counter(page)]++;
 
-		if (is_write_migration_entry(entry) &&
+		if (is_writable_migration_entry(entry) &&
 				is_cow_mapping(vm_flags)) {
 			/*
 			 * COW mappings require pages in both
 			 * parent and child to be set to read.
 			 */
-			make_migration_entry_read(&entry);
+			entry = make_readable_migration_entry(
+							swp_offset(entry));
 			pte = swp_entry_to_pte(entry);
 			if (pte_swp_soft_dirty(*src_pte))
 				pte = pte_swp_mksoft_dirty(pte);
@@ -771,9 +772,10 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		 * when a device driver is involved (you cannot easily
 		 * save and restore device driver state).
 		 */
-		if (is_write_device_private_entry(entry) &&
+		if (is_writable_device_private_entry(entry) &&
 		    is_cow_mapping(vm_flags)) {
-			make_device_private_entry_read(&entry);
+			entry = make_readable_device_private_entry(
+							swp_offset(entry));
 			pte = swp_entry_to_pte(entry);
 			if (pte_swp_uffd_wp(*src_pte))
 				pte = pte_swp_mkuffd_wp(pte);
diff --git a/mm/migrate.c b/mm/migrate.c
index 749321ae3026..930de919b1f2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -210,13 +210,18 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
 		 * Recheck VMA as permissions can change since migration started
 		 */
 		entry = pte_to_swp_entry(*pvmw.pte);
-		if (is_write_migration_entry(entry))
+		if (is_writable_migration_entry(entry))
 			pte = maybe_mkwrite(pte, vma);
 		else if (pte_swp_uffd_wp(*pvmw.pte))
 			pte = pte_mkuffd_wp(pte);
 
 		if (unlikely(is_device_private_page(new))) {
-			entry = make_device_private_entry(new, pte_write(pte));
+			if (pte_write(pte))
+				entry = make_writable_device_private_entry(
+							page_to_pfn(new));
+			else
+				entry = make_readable_device_private_entry(
+							page_to_pfn(new));
 			pte = swp_entry_to_pte(entry);
 			if (pte_swp_soft_dirty(*pvmw.pte))
 				pte = pte_swp_mksoft_dirty(pte);
@@ -2407,7 +2412,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 
 			mpfn = migrate_pfn(page_to_pfn(page)) |
 					MIGRATE_PFN_MIGRATE;
-			if (is_write_device_private_entry(entry))
+			if (is_writable_device_private_entry(entry))
 				mpfn |= MIGRATE_PFN_WRITE;
 		} else {
 			if (!(migrate->flags & MIGRATE_VMA_SELECT_SYSTEM))
@@ -2453,8 +2458,12 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			ptep_get_and_clear(mm, addr, ptep);
 
 			/* Setup special migration page table entry */
-			entry = make_migration_entry(page, mpfn &
-						     MIGRATE_PFN_WRITE);
+			if (mpfn & MIGRATE_PFN_WRITE)
+				entry = make_writable_migration_entry(
+							page_to_pfn(page));
+			else
+				entry = make_readable_migration_entry(
+							page_to_pfn(page));
 			swp_pte = swp_entry_to_pte(entry);
 			if (pte_present(pte)) {
 				if (pte_soft_dirty(pte))
@@ -2927,7 +2936,12 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 	if (is_device_private_page(page)) {
 		swp_entry_t swp_entry;
 
-		swp_entry = make_device_private_entry(page, vma->vm_flags & VM_WRITE);
+		if (vma->vm_flags & VM_WRITE)
+			swp_entry = make_writable_device_private_entry(
+						page_to_pfn(page));
+		else
+			swp_entry = make_readable_device_private_entry(
+						page_to_pfn(page));
 		entry = swp_entry_to_pte(swp_entry);
 	} else {
 		/*
diff --git a/mm/mprotect.c b/mm/mprotect.c
index e7a443157988..ee5961888e70 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -143,23 +143,25 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
 			pte_t newpte;
 
-			if (is_write_migration_entry(entry)) {
+			if (is_writable_migration_entry(entry)) {
 				/*
 				 * A protection check is difficult so
 				 * just be safe and disable write
 				 */
-				make_migration_entry_read(&entry);
+				entry = make_readable_migration_entry(
+							swp_offset(entry));
 				newpte = swp_entry_to_pte(entry);
 				if (pte_swp_soft_dirty(oldpte))
 					newpte = pte_swp_mksoft_dirty(newpte);
 				if (pte_swp_uffd_wp(oldpte))
 					newpte = pte_swp_mkuffd_wp(newpte);
-			} else if (is_write_device_private_entry(entry)) {
+			} else if (is_writable_device_private_entry(entry)) {
 				/*
 				 * We do not preserve soft-dirtiness. See
 				 * copy_one_pte() for explanation.
 				 */
-				make_device_private_entry_read(&entry);
+				entry = make_readable_device_private_entry(
+							swp_offset(entry));
 				newpte = swp_entry_to_pte(entry);
 				if (pte_swp_uffd_wp(oldpte))
 					newpte = pte_swp_mkuffd_wp(newpte);
diff --git a/mm/rmap.c b/mm/rmap.c
index 693a610e181d..bc08c4d4b58a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1526,7 +1526,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 			 * pte. do_swap_page() will wait until the migration
 			 * pte is removed and then restart fault handling.
 			 */
-			entry = make_migration_entry(page, 0);
+			entry = make_readable_migration_entry(page_to_pfn(page));
 			swp_pte = swp_entry_to_pte(entry);
 
 			/*
@@ -1622,8 +1622,12 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 			 * pte. do_swap_page() will wait until the migration
 			 * pte is removed and then restart fault handling.
 			 */
-			entry = make_migration_entry(subpage,
-						     pte_write(pteval));
+			if (pte_write(pteval))
+				entry = make_writable_migration_entry(
+							page_to_pfn(subpage));
+			else
+				entry = make_readable_migration_entry(
+							page_to_pfn(subpage));
 			swp_pte = swp_entry_to_pte(entry);
 			if (pte_soft_dirty(pteval))
 				swp_pte = pte_swp_mksoft_dirty(swp_pte);
-- 
2.20.1
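
As a quick illustration of the calling-convention change described in the
commit message (a minimal sketch, not part of the patch; "page" and
"is_write" are illustrative stand-in locals), a caller that previously
passed a write flag now picks one of the two pfn-based helpers:

	swp_entry_t entry;

	/*
	 * Before this series a single helper selected read vs. write
	 * with a flag argument:
	 *	entry = make_migration_entry(page, is_write);
	 * After it, the read and write cases use separate helpers and
	 * both take a page frame number rather than a struct page.
	 */
	if (is_write)
		entry = make_writable_migration_entry(page_to_pfn(page));
	else
		entry = make_readable_migration_entry(page_to_pfn(page));

	/*
	 * Downgrading an existing entry to read-only (what the old
	 * make_migration_entry_read() did in place) reuses its offset.
	 */
	entry = make_readable_migration_entry(swp_offset(entry));

The device private helpers follow the same pattern, with
make_readable_device_private_entry() and make_writable_device_private_entry()
replacing the old make_device_private_entry(page, write).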