From: Mina Almasry <almasrymina@google.com>
Date: Sat, 9 Jul 2022 14:55:09 -0700
Subject: Re: [RFC PATCH 10/26] hugetlb: add for_each_hgm_shift
To: James Houghton <jthoughton@google.com>
Cc: Mike Kravetz, Muchun Song, Peter Xu, David Hildenbrand, David Rientjes,
	Axel Rasmussen, Jue Wang, Manish Mishra, "Dr. David Alan Gilbert",
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20220624173656.2033256-1-jthoughton@google.com>
	<20220624173656.2033256-11-jthoughton@google.com>

On Fri, Jul 8, 2022 at 8:52 AM James Houghton wrote:
>
> On Tue, Jun 28, 2022 at 2:58 PM Mina Almasry
wrote:
> >
> > On Fri, Jun 24, 2022 at 10:37 AM James Houghton wrote:
> > >
> > > This is a helper macro to loop through all the usable page sizes for a
> > > high-granularity-enabled HugeTLB VMA. Given the VMA's hstate, it will
> > > loop, in descending order, through the page sizes that HugeTLB supports
> > > for this architecture; it always includes PAGE_SIZE.
> > >
> > > Signed-off-by: James Houghton <jthoughton@google.com>
> > > ---
> > >  mm/hugetlb.c | 10 ++++++++++
> > >  1 file changed, 10 insertions(+)
> > >
> > > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > > index 8b10b941458d..557b0afdb503 100644
> > > --- a/mm/hugetlb.c
> > > +++ b/mm/hugetlb.c
> > > @@ -6989,6 +6989,16 @@ bool hugetlb_hgm_enabled(struct vm_area_struct *vma)
> > >  	/* All shared VMAs have HGM enabled. */
> > >  	return vma->vm_flags & VM_SHARED;
> > >  }
> > > +static unsigned int __shift_for_hstate(struct hstate *h)
> > > +{
> > > +	if (h >= &hstates[hugetlb_max_hstate])
> > > +		return PAGE_SHIFT;
> >
> > h >= &hstates[hugetlb_max_hstate] means that h is out of bounds, no? Am
> > I missing something here?
>
> Yeah, it goes out of bounds intentionally. Maybe I should have called
> this out. We need for_each_hgm_shift to include PAGE_SHIFT, and there
> is no hstate for it. So to handle it, we iterate past the end of the
> hstate array, and when we are past the end, we return PAGE_SHIFT and
> stop iterating further. This is admittedly kind of gross; if you have
> other suggestions for a way to get a clean `for_each_hgm_shift` macro
> like this, I'm all ears. :)
>
> >
> > So is this intending to do:
> >
> > 	if (h == &hstates[hugetlb_max_hstate])
> > 		return PAGE_SHIFT;
> >
> > ? If so, could we write it as so?
>
> Yeah, this works. I'll write it this way instead. If that condition is
> true, `h` is out of bounds (`hugetlb_max_hstate` is past the end, not
> the index for the final element). I guess `hugetlb_max_hstate` is a
> bit of a misnomer.
>
> >
> > I'm also wondering why __shift_for_hstate(&hstates[hugetlb_max_hstate])
> > == PAGE_SHIFT? Isn't the last hstate the smallest hstate, which should
> > be 2MB on x86? Shouldn't this return PMD_SHIFT in that case?
>
> `huge_page_shift(&hstates[hugetlb_max_hstate - 1])` is PMD_SHIFT on x86.
> Actually reading `hstates[hugetlb_max_hstate]` would be bad, which is
> why `__shift_for_hstate` exists: to return PAGE_SHIFT when we would
> otherwise attempt to compute
> `huge_page_shift(&hstates[hugetlb_max_hstate])`.
>
> >
> > > +	return huge_page_shift(h);
> > > +}
> > > +#define for_each_hgm_shift(hstate, tmp_h, shift) \
> > > +	for ((tmp_h) = hstate; (shift) = __shift_for_hstate(tmp_h), \
> > > +			(tmp_h) <= &hstates[hugetlb_max_hstate]; \
>
> Note the <= here. If we wanted to always remain in bounds here, we'd
> want < instead. But we don't have an hstate for PAGE_SIZE.
>

I see, thanks for the explanation. I can see two options here to make the
code more understandable:

Option (a): don't go past the array. I.e. for_each_hgm_shift() will loop
over all the hugetlb-supported shifts on this arch, and the calling code
falls back to PAGE_SHIFT if the hugetlb page shifts don't work for it. I
admit that could lead to code duplication in the calling code, but I have
not gotten to the patch that calls this yet.

Option (b): simply add a comment and/or make it more obvious that you're
intentionally going out of bounds, and you want to loop over PAGE_SHIFT
at the end. Something like:

+/*
+ * Returns huge_page_shift(h) if h is a pointer to an hstate in the
+ * hstates[] array, PAGE_SHIFT otherwise.
+ */
+static unsigned int __shift_for_hstate(struct hstate *h)
+{
+	if (h < &hstates[0] || h > &hstates[hugetlb_max_hstate - 1])
+		return PAGE_SHIFT;
+	return huge_page_shift(h);
+}
+
+/*
+ * Loops over all the HGM shifts supported on this arch, from the
+ * largest shift possible down to PAGE_SHIFT inclusive.
+ */
+#define for_each_hgm_shift(hstate, tmp_h, shift) \
+	for ((tmp_h) = hstate; (shift) = __shift_for_hstate(tmp_h), \
+			(tmp_h) <= &hstates[hugetlb_max_hstate]; \
+		(tmp_h)++)
 #endif /* CONFIG_HUGETLB_HIGH_GRANULARITY_MAPPING */

> > > +		(tmp_h)++)
> > >  #endif /* CONFIG_HUGETLB_HIGH_GRANULARITY_MAPPING */
> > >
> > >  /*
> > > --
> > > 2.37.0.rc0.161.g10f37bed90-goog
> > >