Date: Sun, 28 Feb 2021 19:50:44 +0100
From: Oscar Salvador
To: David Hildenbrand
Cc: Andrew Morton, Michal Hocko, Vlastimil Babka <vbabka@suse.cz>,
 pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, Anshuman Khandual
Subject: Re: [PATCH v2 1/7] mm,memory_hotplug: Allocate memmap from the added memory range
Message-ID: <20210228185044.GA3929@localhost.localdomain>
References: <20210209133854.17399-1-osalvador@suse.de>
 <20210209133854.17399-2-osalvador@suse.de>
 <60afb5ca-230e-265f-9579-dac66a152c33@redhat.com>
In-Reply-To: <60afb5ca-230e-265f-9579-dac66a152c33@redhat.com>

On Thu, Feb 25, 2021 at 07:58:01PM +0100, David Hildenbrand wrote:
> > In this way, we have:
> >
> > (start_pfn, buddy_start_pfn - 1] = Initialized and PageReserved
> > (buddy_start_pfn, end_pfn] = Initialized and sent to buddy
>
> nit: shouldn't it be
>
> [start_pfn, buddy_start_pfn - 1]
> [buddy_start_pfn, end_pfn - 1]
>
> or
>
> [start_pfn, buddy_start_pfn)
> [buddy_start_pfn, end_pfn)
>
> (I remember that "[" means inclusive and "(" means exclusive, I might be
> wrong :) )
>
> I actually prefer the first variant.

Let us go with the first variant, I guess it is clearer.

> > -static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages)
> > +static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
> > +			       unsigned long buddy_start_pfn)
> >  {
> >  	const unsigned long end_pfn = start_pfn + nr_pages;
> > -	unsigned long pfn;
> > +	unsigned long pfn = buddy_start_pfn;
> > +
> > +	/*
> > +	 * When using memmap_on_memory, the range might be unaligned as the
> > +	 * first pfns are used for vmemmap pages. Align it in case we need to.
> > +	 */
> > +	if (pfn & ((1 << (MAX_ORDER - 1)) - 1)) {
>
> if (!IS_ALIGNED(pfn, MAX_ORDER_NR_PAGES))

Will change.

> > +		(*online_page_callback)(pfn_to_page(pfn), pageblock_order);
> > +		pfn += 1 << pageblock_order;
>
> pfn += pageblock_nr_pages;
>
> Can you add a comment why we can be sure that we are off by a single
> pageblock? What about s390x, where MAX_ORDER_NR_PAGES == 4 * pageblock_nr_pages?
>
> Would it make things simpler to just do a
>
> while (!IS_ALIGNED(pfn, MAX_ORDER_NR_PAGES)) {
> 	(*online_page_callback)(pfn_to_page(pfn), 0);
> 	pfn++;
> }

Honestly, I did not spend much time thinking about platforms other than
arm64/x86_64, but I think that would be the universal solution, as it
makes no assumptions. I will replace it.
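Just so we are looking at the same thing, this is roughly what I have in
mind for the next version. Untested sketch; the extra "pfn < end_pfn"
guard in the alignment loop is my own defensiveness, the rest follows
your suggestion on top of what the function already does:

static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
			       unsigned long buddy_start_pfn)
{
	const unsigned long end_pfn = start_pfn + nr_pages;
	unsigned long pfn = buddy_start_pfn;

	/*
	 * When using memmap_on_memory, the first pfns of the range hold the
	 * vmemmap pages, so buddy_start_pfn might not be MAX_ORDER aligned.
	 * Online single pages until we reach a MAX_ORDER_NR_PAGES boundary;
	 * this makes no assumption about how pageblock_order relates to
	 * MAX_ORDER (e.g. s390x).
	 */
	while (!IS_ALIGNED(pfn, MAX_ORDER_NR_PAGES) && pfn < end_pfn) {
		(*online_page_callback)(pfn_to_page(pfn), 0);
		pfn++;
	}

	/* From here on the range is MAX_ORDER aligned. */
	while (pfn < end_pfn) {
		(*online_page_callback)(pfn_to_page(pfn), MAX_ORDER - 1);
		pfn += MAX_ORDER_NR_PAGES;
	}

	/* mark all involved sections as online */
	online_mem_sections(start_pfn, end_pfn);
}

The single-page loop runs at most MAX_ORDER_NR_PAGES - 1 times, so the
overhead is negligible whatever the pageblock size ends up being.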
> > +bool mhp_supports_memmap_on_memory(unsigned long size)
> > +{
> > +	return memmap_on_memory_enabled &&
> > +	       size == memory_block_size_bytes();
>
> Regarding my other comments in reply to the other patches, I'd move all
> the magic you have when trying to enable it right here.

Ok, will do.

> > @@ -1613,7 +1658,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
> >  	zone_pcp_disable(zone);
> >
> >  	/* set above range as isolated */
> > -	ret = start_isolate_page_range(start_pfn, end_pfn,
> > +	ret = start_isolate_page_range(buddy_start_pfn, end_pfn,
> >  				       MIGRATE_MOVABLE,
> >  				       MEMORY_OFFLINE | REPORT_FAILURE);
>
> Did you take care to properly adjust undo_isolate_page_range() as well?
> I can't spot it.

No, I did not. Good that you noticed :-)
Will fix it up in the next version.

> > +static int get_nr_vmemmap_pages_cb(struct memory_block *mem, void *arg)
> > +{
> > +	unsigned long *nr_vmemmap_pages = (unsigned long *)arg;
> > +
> > +	*nr_vmemmap_pages += mem->nr_vmemmap_pages;
> > +	return mem->nr_vmemmap_pages;
> > +}
> > +
>
> I think you can do this more easily: all you want to know is whether
> there is any block that has nr_vmemmap_pages set - and return the value.
>
> static int first_set_nr_vmemmap_pages_cb(struct memory_block *mem, void *arg)
> {
> 	/* If not set, continue with the next block. */
> 	return mem->nr_vmemmap_pages;
> }

Yeah, less code. Will fix it.

> > ...
> > +	walk_memory_blocks(start, size, &nr_vmemmap_pages,
> > +			   get_nr_vmemmap_pages_cb);
> > ...
>
> mem->nr_vmemmap_pages = walk_memory_blocks(start ...)
>
> Looks quite promising, only a couple of things to fine-tune :) Thanks!

Thanks for having a look, that is highly appreciated!
Let us see if we can polish the minor things that are missing and target
this for the next release.

-- 
Oscar Salvador
SUSE L3