From mboxrd@z Thu Jan 1 00:00:00 1970
From: Muchun Song
Date: Mon, 23 Nov 2020 20:07:23 +0800
Subject: Re: [External] Re: [PATCH v5 00/21] Free some vmemmap pages of hugetlb page
In-Reply-To: <20201123113208.GL27488@dhcp22.suse.cz>
References: <20201120084202.GJ3200@dhcp22.suse.cz> <20201120131129.GO3200@dhcp22.suse.cz> <20201123074046.GB27488@dhcp22.suse.cz> <20201123094344.GG27488@dhcp22.suse.cz> <20201123104258.GJ27488@dhcp22.suse.cz> <20201123113208.GL27488@dhcp22.suse.cz>
To: Michal Hocko
Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, Peter Zijlstra, viro@zeniv.linux.org.uk, Andrew Morton, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, Randy Dunlap, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, Mina Almasry, David Rientjes, Matthew
Wilcox, Oscar Salvador, "Song Bao Hua (Barry Song)", Xiongchun duan, linux-doc@vger.kernel.org, LKML, Linux Memory Management List, linux-fsdevel
Content-Type: text/plain; charset="UTF-8"
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Nov 23, 2020 at 7:32 PM Michal Hocko wrote:
>
> On Mon 23-11-20 19:16:18, Muchun Song wrote:
> > On Mon, Nov 23, 2020 at 6:43 PM Michal Hocko wrote:
> > >
> > > On Mon 23-11-20 18:36:33, Muchun Song wrote:
> > > > On Mon, Nov 23, 2020 at 5:43 PM Michal Hocko wrote:
> > > > >
> > > > > On Mon 23-11-20 16:53:53, Muchun Song wrote:
> > > > > > On Mon, Nov 23, 2020 at 3:40 PM Michal Hocko wrote:
> > > > > > >
> > > > > > > On Fri 20-11-20 23:44:26, Muchun Song wrote:
> > > > > > > > On Fri, Nov 20, 2020 at 9:11 PM Michal Hocko wrote:
> > > > > > > > >
> > > > > > > > > On Fri 20-11-20 20:40:46, Muchun Song wrote:
> > > > > > > > > > On Fri, Nov 20, 2020 at 4:42 PM Michal Hocko wrote:
> > > > > > > > > > >
> > > > > > > > > > > On Fri 20-11-20 14:43:04, Muchun Song wrote:
> > > > > > > > > > > [...]
> > > > > > > > > > >
> > > > > > > > > > > Thanks for improving the cover letter and providing some numbers. I have
> > > > > > > > > > > only glanced through the patchset because I didn't really have more time
> > > > > > > > > > > to dive deeply into them.
> > > > > > > > > > >
> > > > > > > > > > > Overall it looks promising. To summarize: I would prefer to not have
> > > > > > > > > > > the feature enablement controlled by a compile time option, and the kernel
> > > > > > > > > > > command line option should be opt-in. I also do not like that freeing
> > > > > > > > > > > the pool can trigger the oom killer or even shut the system down if no
> > > > > > > > > > > oom victim is eligible.
> > > > > > > > > >
> > > > > > > > > > Hi Michal,
> > > > > > > > > >
> > > > > > > > > > I have replied to you about those questions on the other mail thread.
> > > > > > > > > >
> > > > > > > > > > Thanks.
> > > > > > > > > > >
> > > > > > > > > > > One thing that I didn't really get to think hard about is what is the
> > > > > > > > > > > effect of vmemmap manipulation wrt pfn walkers. pfn_to_page can be
> > > > > > > > > > > invalid when racing with the split. How do we enforce that this won't
> > > > > > > > > > > blow up?
> > > > > > > > > >
> > > > > > > > > > This feature depends on CONFIG_SPARSEMEM_VMEMMAP;
> > > > > > > > > > in that case, pfn_to_page can work. The return value of
> > > > > > > > > > pfn_to_page is actually the address of its struct page.
> > > > > > > > > > I cannot figure out where the problem is. Can you describe the
> > > > > > > > > > problem in detail please? Thanks.
> > > > > > > > >
> > > > > > > > > struct page returned by pfn_to_page might get invalid right when it is
> > > > > > > > > returned because vmemmap could get freed up and the respective memory
> > > > > > > > > released to the page allocator and reused for something else. See?
> > > > > > > >
> > > > > > > > If the HugeTLB page is already allocated from the buddy allocator,
> > > > > > > > can the struct page of the HugeTLB be freed? Does this case exist?
> > > > > > >
> > > > > > > Nope, struct pages only ever get deallocated when the respective memory
> > > > > > > (they describe) is hotremoved via hotplug.
> > > > > > >
> > > > > > > > If yes, how can we free the HugeTLB page to the buddy allocator
> > > > > > > > (given that we cannot access the struct page)?
> > > > > > >
> > > > > > > But I do not follow how that relates to my concern above.
> > > > > >
> > > > > > Sorry, I may have misunderstood your concerns.
> > > > > >
> > > > > >  vmemmap pages                  page frame
> > > > > > +-----------+   mapping to   +-----------+
> > > > > > |           | -------------> |     0     |
> > > > > > +-----------+                +-----------+
> > > > > > |           | -------------> |     1     |
> > > > > > +-----------+                +-----------+
> > > > > > |           | -------------> |     2     |
> > > > > > +-----------+                +-----------+
> > > > > > |           | -------------> |     3     |
> > > > > > +-----------+                +-----------+
> > > > > > |           | -------------> |     4     |
> > > > > > +-----------+                +-----------+
> > > > > > |           | -------------> |     5     |
> > > > > > +-----------+                +-----------+
> > > > > > |           | -------------> |     6     |
> > > > > > +-----------+                +-----------+
> > > > > > |           | -------------> |     7     |
> > > > > > +-----------+                +-----------+
> > > > > >
> > > > > > In this patch series, we will free page frames 2-7 to the
> > > > > > buddy allocator. You mean that pfn_to_page can return an invalid
> > > > > > value when the pfn is within page frames 2-7? Thanks.
> > > > >
> > > > > No, I really mean that pfn_to_page will give you a struct page pointer
> > > > > from pages which you release from the vmemmap page tables. Those pages
> > > > > might get reused as soon as they are freed to the page allocator.
> > > >
> > > > We will remap vmemmap pages 2-7 (virtual addresses) to page
> > > > frame 1, and then we free page frames 2-7 to the buddy allocator.
> > >
> > > And this doesn't really happen in an atomic fashion from the pfn walker
> > > POV, right? So it is very well possible that
> >
> > Yeah, you are right. But it may not be a problem for HugeTLB pages,
> > because in most cases we only read the tail struct page and get the
> > head struct page through compound_head() when the pfn is within
> > a HugeTLB range. Right?
>
> Many pfn walkers would encounter the head page first and then skip over
> the rest. Those should be reasonably safe.
> But there is no guarantee and
> the fact that you need a valid page->compound_head which might get
> scribbled over once you have the struct page makes this extremely
> subtle.

In this patch series, we can guarantee that page->compound_head is
always valid, because we reuse the first tail page. Maybe you need to
look closer at this series. Thanks.

> --
> SUSE Labs

--
Yours,
Muchun