Subject: Re: [RFC PATCH 04/26] hugetlb: make huge_pte_lockptr take an explicit shift argument.
From: Muchun Song
Date: Fri, 1 Jul 2022 11:32:51 +0800
To: James Houghton
Cc: Mike Kravetz, Peter Xu, David Hildenbrand, David Rientjes, Axel Rasmussen,
 Mina Almasry, Jue Wang, Manish Mishra, "Dr . David Alan Gilbert",
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, zhengqi.arch@bytedance.com
References: <20220624173656.2033256-1-jthoughton@google.com>
 <20220624173656.2033256-5-jthoughton@google.com>

> On Jul 1, 2022, at 00:23, James Houghton wrote:
>
> On Thu, Jun 30, 2022 at 2:35 AM Muchun Song wrote:
>>
>> On Wed, Jun 29, 2022 at 03:24:45PM -0700, Mike Kravetz wrote:
>>> On 06/29/22 14:39, James Houghton wrote:
>>>> On Wed, Jun 29, 2022 at 2:04 PM Mike Kravetz wrote:
>>>>>
>>>>> On 06/29/22 14:09, Muchun Song wrote:
>>>>>> On Mon, Jun 27, 2022 at 01:51:53PM -0700, Mike Kravetz wrote:
>>>>>>> On 06/24/22 17:36, James Houghton wrote:
>>>>>>>> This is needed to handle PTL locking with high-granularity mapping. We
>>>>>>>> won't always be using the PMD-level PTL even if we're using the 2M
>>>>>>>> hugepage hstate. It's possible that we're dealing with 4K PTEs, in which
>>>>>>>> case we need to lock the PTL for the 4K PTE.
>>>>>>>
>>>>>>> I'm not really sure why this would be required.
>>>>>>> Why not use the PMD-level lock for 4K PTEs? Seems that would scale better
>>>>>>> with less contention than using the more coarse mm lock.
>>>>>>>
>>>>>>
>>>>>> Your words make me think of another question unrelated to this patch.
>>>>>> We __know__ that arm64 supports contiguous PTE HugeTLB.
>>>>>> huge_pte_lockptr() did not consider this case, so those HugeTLB pages
>>>>>> contend on the mm lock. Seems we should optimize this case. Something like:
>>>>>>
>>>>>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>>>>>> index 0d790fa3f297..68a1e071bfc0 100644
>>>>>> --- a/include/linux/hugetlb.h
>>>>>> +++ b/include/linux/hugetlb.h
>>>>>> @@ -893,7 +893,7 @@ static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask)
>>>>>>  static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
>>>>>>                                             struct mm_struct *mm, pte_t *pte)
>>>>>>  {
>>>>>> -       if (huge_page_size(h) == PMD_SIZE)
>>>>>> +       if (huge_page_size(h) <= PMD_SIZE)
>>>>>>                 return pmd_lockptr(mm, (pmd_t *) pte);
>>>>>>         VM_BUG_ON(huge_page_size(h) == PAGE_SIZE);
>>>>>>         return &mm->page_table_lock;
>>>>>>
>>>>>> I did not check if elsewhere needs to be changed as well. Just a
>>>>>> preliminary thought.
>>>>
>>>> I'm not sure if this works. If hugetlb_pte_size(hpte) is PAGE_SIZE,
>>>> then `hpte.ptep` will be a pte_t, not a pmd_t -- I assume that breaks
>>>> things. So I think, when doing a HugeTLB PT walk down to PAGE_SIZE, we
>>>> need to separately keep track of the location of the PMD so that we
>>>> can use it to get the PMD lock.
>>>
>>> I assume Muchun was talking about changing this in current code (before
>>> your changes) where huge_page_size(h) cannot be PAGE_SIZE.
>>>
>>
>> Yes, that's what I meant.
>
> Right -- but I think my point still stands. If `huge_page_size(h)` is
> CONT_PTE_SIZE, then the `pte_t *` passed to `huge_pte_lockptr` will
> *actually* point to a `pte_t` and not a `pmd_t` (I'm pretty sure the

Right. It is a pte in this case.

> distinction is important). So it seems like we need to separately keep
> track of the real pmd_t that is being used in the CONT_PTE_SIZE case

If we want to find the pmd_t from a pte_t, I think we can introduce a new
field in struct page, just like the thread [1] does.

[1] https://lore.kernel.org/lkml/20211110105428.32458-7-zhengqi.arch@bytedance.com/

> (and therefore, when considering HGM, the PAGE_SIZE case).
>
> However, we *can* make this optimization for CONT_PMD_SIZE (maybe this
> is what you originally meant, Muchun?), so instead of
> `huge_page_size(h) == PMD_SIZE`, we could do `huge_page_size(h) >=
> PMD_SIZE && huge_page_size(h) < PUD_SIZE`.

Right. That is a good starting point for optimizing the CONT_PMD_SIZE case.
Thanks.

>
>>
>> Thanks.
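
For reference, a rough, untested sketch of the CONT_PMD-only check suggested
above, applied to the huge_pte_lockptr() quoted in the diff earlier in the
thread (this only restates the idea under discussion; it is not a tested
patch, and it keeps the current pre-HGM assumption that huge_page_size() is
never PAGE_SIZE):

static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
					   struct mm_struct *mm, pte_t *pte)
{
	/*
	 * PMD and contiguous-PMD hugepages: any size in
	 * [PMD_SIZE, PUD_SIZE) is mapped by pmd_t entries, so the
	 * pte_t * we were handed really points at a pmd_t and the
	 * split PMD lock can be used.
	 */
	if (huge_page_size(h) >= PMD_SIZE && huge_page_size(h) < PUD_SIZE)
		return pmd_lockptr(mm, (pmd_t *) pte);
	/*
	 * CONT_PTE mappings stay on the coarse mm-wide lock because
	 * their pte_t * does not point at a pmd_t; PAGE_SIZE cannot
	 * occur in the current (pre-HGM) code.
	 */
	VM_BUG_ON(huge_page_size(h) == PAGE_SIZE);
	return &mm->page_table_lock;
}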