From: Alistair Popple
To: Zi Yan
Subject: Re: [PATCH v3 4/8] mm/rmap: Split migration into its own function
Date: Fri, 5 Mar 2021 10:54:48 +1100
Message-ID: <84997524.IMQpRet0Aq@nvdebian>
References: <20210226071832.31547-1-apopple@nvidia.com> <20210226071832.31547-5-apopple@nvidia.com>
On Wednesday, 3 March 2021 9:08:15 AM AEDT Zi Yan wrote:
> On 26 Feb 2021, at 2:18, Alistair Popple wrote:
>
> > diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> > index 7f1ee411bd7b..77fa17de51d7 100644
> > --- a/include/linux/rmap.h
> > +++ b/include/linux/rmap.h
> > @@ -86,8 +86,6 @@ struct anon_vma_chain {
> >  };
> >
> >  enum ttu_flags {
> > -	TTU_MIGRATION		= 0x1,	/* migration mode */
> > -
> > 	TTU_SPLIT_HUGE_PMD	= 0x4,	/* split huge PMD if any */
>
> It implies freeze in try_to_migrate() and no freeze in try_to_unmap(). I think
> we need some comments here, above try_to_migrate(), and above try_to_unmap(),
> to clarify the implication.

Sure. This confused me for a bit, and I was initially tempted to leave
TTU_SPLIT_FREEZE as a separate mode flag. But looking at what freeze actually
does, it made sense to remove it: try_to_migrate() is for installing migration
entries (which is what freeze does), while try_to_unmap() just unmaps. So I'll
add some comments to that effect.
> > 	TTU_IGNORE_MLOCK	= 0x8,	/* ignore mlock */
> > 	TTU_IGNORE_HWPOISON	= 0x20,	/* corrupted page is recoverable */
> > @@ -96,7 +94,6 @@ enum ttu_flags {
> > 					 * do a final flush if necessary */
> > 	TTU_RMAP_LOCKED		= 0x80,	/* do not grab rmap lock:
> > 					 * caller holds it */
> > -	TTU_SPLIT_FREEZE	= 0x100, /* freeze pte under splitting thp */
> >  };
> >
> >  #ifdef CONFIG_MMU
> > @@ -193,6 +190,7 @@ static inline void page_dup_rmap(struct page *page, bool compound)
> >  int page_referenced(struct page *, int is_locked,
> > 			struct mem_cgroup *memcg, unsigned long *vm_flags);
> >
> > +bool try_to_migrate(struct page *page, enum ttu_flags flags);
> >  bool try_to_unmap(struct page *, enum ttu_flags flags);
> >
> >  /* Avoid racy checks */
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index d00b93dc2d9e..357052a4567b 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -2351,16 +2351,16 @@ void vma_adjust_trans_huge(struct vm_area_struct *vma,
> >
> >  static void unmap_page(struct page *page)
> >  {
> > -	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK |
> > -		TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;
> > +	enum ttu_flags ttu_flags = TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;
> >  	bool unmap_success;
> >
> >  	VM_BUG_ON_PAGE(!PageHead(page), page);
> >
> >  	if (PageAnon(page))
> > -		ttu_flags |= TTU_SPLIT_FREEZE;
> > -
> > -	unmap_success = try_to_unmap(page, ttu_flags);
> > +		unmap_success = try_to_migrate(page, ttu_flags);
> > +	else
> > +		unmap_success = try_to_unmap(page, ttu_flags |
> > +						TTU_IGNORE_MLOCK);
>
> I think we need a comment here about why anonymous pages need try_to_migrate()
> and others need try_to_unmap().

Historically this comes from baa355fd3314 ("thp: file pages support for
split_huge_page()") which says:

"We don't setup migration entries. Just unmap pages. It helps handling cases
when i_size is in the middle of the page: no need handle unmap pages beyond
i_size manually."
But I'll add a comment here, thanks.

 - Alistair

> Thanks.
>
> --
> Best Regards,
> Yan Zi