From: Zi Yan
To: "Kirill A. Shutemov"
Cc: Roman Gushchin, Rik van Riel, "Kirill A. Shutemov", Matthew Wilcox, Shakeel Butt, Yang Shi, David Nellans
Subject: Re: [RFC PATCH 01/16] mm: add pagechain container for storing multiple pages.
Date: Mon, 7 Sep 2020 11:11:05 -0400
Message-ID: <50FA95D1-9222-48C0-9223-C1267E8C7A4A@nvidia.com>
In-Reply-To: <20200907122228.4zlyfysdul3s62me@box>
References: <20200902180628.4052244-1-zi.yan@sent.com> <20200902180628.4052244-2-zi.yan@sent.com> <20200907122228.4zlyfysdul3s62me@box>

On 7 Sep 2020, at 8:22, Kirill A. Shutemov wrote:

> On Wed, Sep 02, 2020 at 02:06:13PM -0400, Zi Yan wrote:
>> From: Zi Yan
>>
>> When depositing page table pages for 1GB THPs, we need 512 PTE pages +
>> 1 PMD page. Instead of counting and depositing 513 pages, we can use the
>> PMD page as a leader page and chain the rest 512 PTE pages with ->lru.
>> This, however, prevents us depositing PMD pages with ->lru, which is
>> currently used by depositing PTE pages for 2MB THPs. So add a new
>> pagechain container for PMD pages.
>>
>> Signed-off-by: Zi Yan
>
> Just deposit it to a linked list in the mm_struct as we do for PMD if
> split ptl disabled.

Thank you for checking the patches. Since we don't have a PUD split lock yet, I store the PMD page table pages in a newly added linked list head in mm_struct, as you suggested above.

I was too vague about my pagechain design for depositing page table pages for PUD THPs; sorry about the confusion. Let me clarify why I am using a pagechain here. I am sure there are other possible designs, and I am happy to change my code.

In my design, I do not store all page table pages in a single list. I first deposit 512 PTE pages into one PMD page table page's pmd_huge_pte using pgtable_trans_huge_deposit(), then deposit that PMD page onto a newly added linked list in mm_struct. Since pmd_huge_pte shares space with half of lru in struct page, we cannot use lru to link all the PMD pages together; as a result, I added pagechain. This design also avoids two problems:

1. When we withdraw the PMD page during a PUD THP split, we do not need to withdraw 513 pages, set up one PMD page, and then deposit 512 PTE pages into that PMD page.

2. We do not mix PMD page table pages and PTE page table pages in a single list, since they are initialized in different ways. Otherwise, we would need to maintain a subtle rule in the single page table page list: in every 513 pages, the first one is a PMD page table page and the rest are PTE page table pages.
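To make the layout concrete, here is a minimal kernel-style sketch of such a container. This is an illustration of the idea under stated assumptions, not the patch's actual code: PAGECHAIN_SIZE, the field names, and pagechain_deposit() are hypothetical; only pgtable_trans_huge_deposit() is an existing kernel interface.

```c
/*
 * Illustrative sketch only. PAGECHAIN_SIZE, the field names, and
 * pagechain_deposit() are assumptions for this discussion, not the
 * definitions used in the actual patch.
 */
#include <linux/list.h>
#include <linux/mm_types.h>

#define PAGECHAIN_SIZE 64	/* illustrative capacity */

/*
 * A pagechain links PMD page table pages without touching page->lru,
 * because pmd_huge_pte overlays half of page->lru in struct page.
 */
struct pagechain {
	struct list_head list;			/* chained off mm_struct */
	unsigned int nr;			/* pages[] entries in use */
	struct page *pages[PAGECHAIN_SIZE];	/* deposited PMD table pages */
};

/*
 * Park one PMD page table page; its 512 PTE pages already hang off its
 * pmd_huge_pte via pgtable_trans_huge_deposit().
 */
static bool pagechain_deposit(struct pagechain *chain, struct page *pmd_page)
{
	if (chain->nr >= PAGECHAIN_SIZE)
		return false;	/* caller must allocate a new pagechain */
	chain->pages[chain->nr++] = pmd_page;
	return true;
}
```

With this layout, splitting a PUD THP only withdraws one PMD page from the chain; its 512 PTE pages come along already deposited in its pmd_huge_pte, which is exactly the 513-page bookkeeping that point 1 above avoids.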
As I am typing, I also realize that my current design does not work when the PMD split lock is disabled, so I will fix it: I would store PMD pages and PTE pages in two separate lists in mm_struct.

Any comments?

—
Best Regards,
Yan Zi