From: Muchun Song <songmuchun@bytedance.com>
Date: Wed, 9 Dec 2020 17:25:14 +0800
Subject: Re: [External] Re: [PATCH v7 05/15] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
To: David Hildenbrand
Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo@redhat.com,
 bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com,
 luto@kernel.org, Peter Zijlstra, viro@zeniv.linux.org.uk, Andrew Morton,
 paulmck@kernel.org, mchehab+huawei@kernel.org,
 pawan.kumar.gupta@linux.intel.com, Randy Dunlap, oneukum@suse.com,
 anshuman.khandual@arm.com, jroedel@suse.de, Mina Almasry, David Rientjes,
 Matthew Wilcox, Oscar Salvador, Michal Hocko,
 "Song Bao Hua (Barry Song)", Xiongchun duan, linux-doc@vger.kernel.org,
 LKML, Linux Memory Management List, linux-fsdevel
In-Reply-To: <096ee806-b371-c22b-9066-8891935fbd5e@redhat.com>
References: <20201130151838.11208-1-songmuchun@bytedance.com>
 <20201130151838.11208-6-songmuchun@bytedance.com>
 <17abb7bb-de39-7580-b020-faec58032de9@redhat.com>
 <096ee806-b371-c22b-9066-8891935fbd5e@redhat.com>

On Wed, Dec 9, 2020 at 4:50 PM David Hildenbrand wrote:
>
> On 09.12.20 08:36, Muchun Song wrote:
> > On Mon, Dec 7, 2020 at 8:39 PM David Hildenbrand wrote:
> >>
> >> On 30.11.20 16:18, Muchun Song wrote:
> >>> In a later patch, we will use free_vmemmap_page() to free the
> >>> unused vmemmap pages and initialize a page for use as a vmemmap
> >>> page via prepare_vmemmap_page().
> >>>
> >>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> >>> ---
> >>>  include/linux/bootmem_info.h | 24 ++++++++++++++++++++++++
> >>>  1 file changed, 24 insertions(+)
> >>>
> >>> diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
> >>> index 4ed6dee1adc9..239e3cc8f86c 100644
> >>> --- a/include/linux/bootmem_info.h
> >>> +++ b/include/linux/bootmem_info.h
> >>> @@ -3,6 +3,7 @@
> >>>  #define __LINUX_BOOTMEM_INFO_H
> >>>
> >>>  #include <linux/mmzone.h>
> >>> +#include <linux/mm.h>
> >>>
> >>>  /*
> >>>   * Types for free bootmem stored in page->lru.next. These have to be in
> >>> @@ -22,6 +23,29 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
> >>>  void get_page_bootmem(unsigned long info, struct page *page,
> >>>  		      unsigned long type);
> >>>  void put_page_bootmem(struct page *page);
> >>> +
> >>> +static inline void free_vmemmap_page(struct page *page)
> >>> +{
> >>> +	VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);
> >>> +
> >>> +	/* bootmem page: the reserved flag is set in reserve_bootmem_region() */
> >>> +	if (PageReserved(page)) {
> >>> +		unsigned long magic = (unsigned long)page->freelist;
> >>> +
> >>> +		if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
> >>> +			put_page_bootmem(page);
> >>> +		else
> >>> +			WARN_ON(1);
> >>> +	}
> >>> +}
> >>> +
> >>> +static inline void prepare_vmemmap_page(struct page *page)
> >>> +{
> >>> +	unsigned long section_nr = pfn_to_section_nr(page_to_pfn(page));
> >>> +
> >>> +	get_page_bootmem(section_nr, page, SECTION_INFO);
> >>> +	mark_page_reserved(page);
> >>> +}
> >>
> >> Can you clarify in the description when exactly these functions are
> >> called and on which type of pages?
> >>
> >> Would indicating "bootmem" in the function names make it clearer what
> >> we are dealing with?
> >>
> >> E.g., any memory allocated via the memblock allocator and not via the
> >> buddy will be marked reserved already in the memmap. It's unclear to
> >> me why we need the mark_page_reserved() here - can you enlighten me? :)
> >
> > Sorry for ignoring this question. The vmemmap pages are allocated from
> > the bootmem allocator, which marks them PG_reserved. For those bootmem
> > pages, we should call put_page_bootmem() to free them. You can see
> > that we clear PG_reserved in put_page_bootmem(). To be consistent,
> > prepare_vmemmap_page() also marks the page as PG_reserved.
>
> I don't think that really makes sense.
>
> After put_page_bootmem() puts the last reference, it clears PG_reserved
> and hands the page over to the buddy via free_reserved_page(). From
> that point on, further get_page_bootmem() would be completely wrong and
> dangerous.
>
> Both put_page_bootmem() and get_page_bootmem() rely on the fact that
> they are dealing with memblock allocations - marked via PG_reserved. If
> prepare_vmemmap_page() were called on something that's *not* coming
> from the memblock allocator, it would be completely broken - or am I
> missing something?
>
> AFAICT, there should rather be a BUG_ON(!PageReserved(page)) in
> prepare_vmemmap_page() - or proper handling to deal with !memblock
> allocations.
>

I want to allocate some pages as the vmemmap when we free a HugeTLB
page to the buddy allocator. So I use prepare_vmemmap_page() to
initialize a page (allocated from the buddy allocator) and make it a
vmemmap page of the freed HugeTLB page. Any suggestions for dealing
with this case?

I have a solution to address this: when the pages are allocated from
the buddy as vmemmap pages, we do not call prepare_vmemmap_page(). When
we free the vmemmap pages of a HugeTLB page, if PG_reserved of a
vmemmap page is set, we call free_vmemmap_page() to free it to the
buddy; otherwise we call free_page(). What is your opinion?

> And as I said, indicating "bootmem" as part of the function names might
> make it clearer that this is not for getting any vmemmap pages (esp.
> allocated when hotplugging memory).

Agree. I am doing that for the next version.

>
> --
> Thanks,
>
> David / dhildenb
>

--
Yours,
Muchun
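
For reference, a minimal sketch of the dispatch Muchun describes above
(the helper name free_hugetlb_vmemmap_page() is illustrative and not
from the series; the caller is assumed to have already resolved each
vmemmap page to its struct page, e.g. by walking the kernel page table
of the vmemmap range):

	/*
	 * Illustrative sketch only -- not code from the series.
	 * Dispatch a vmemmap page to the right free path: pages from
	 * the memblock/bootmem allocator still carry PG_reserved and
	 * must go through free_vmemmap_page(); pages later allocated
	 * from the buddy are handed straight back to it.
	 */
	static void free_hugetlb_vmemmap_page(struct page *page)
	{
		if (PageReserved(page))
			/* bootmem-backed: put_page_bootmem() drops the
			 * reference and clears PG_reserved */
			free_vmemmap_page(page);
		else
			/* buddy-backed: PG_reserved was never set */
			__free_page(page);
	}

The PG_reserved test is what makes this dispatch safe:
reserve_bootmem_region() sets the flag on every memblock allocation at
boot, and the buddy allocator never hands out pages with it set.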