From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Tsirkin" , Jason Wang , Marek Kedzierski , Hui Zhu , Pankaj Gupta , Wei Yang , Oscar Salvador , Michal Hocko , Dan Williams , Anshuman Khandual , Dave Hansen , Vlastimil Babka , Mike Rapoport , "Rafael J. Wysocki" , Len Brown , Pavel Tatashin , Greg Kroah-Hartman , virtualization@lists.linux-foundation.org, linux-acpi@vger.kernel.org Subject: [PATCH v2 1/9] mm: track present early pages per zone Date: Fri, 23 Jul 2021 14:52:02 +0200 Message-Id: <20210723125210.29987-2-david@redhat.com> In-Reply-To: <20210723125210.29987-1-david@redhat.com> References: <20210723125210.29987-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15 Authentication-Results: imf13.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b="FGj/Ot1p"; spf=none (imf13.hostedemail.com: domain of david@redhat.com has no SPF policy when checking 216.205.24.124) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com X-Rspamd-Server: rspam02 X-Stat-Signature: o4m69jxe67ge8dyg7tf1zdin85d9uzao X-Rspamd-Queue-Id: 36D7C100982B X-HE-Tag: 1627044754-647197 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: For implementing a new memory onlining policy, which determines when to online memory blocks to ZONE_MOVABLE semi-automatically, we need the numb= er of present early (boot) pages -- present pages excluding hotplugged pages= . Let's track these pages per zone. Pass a page instead of the zone to adjust_present_page_count(), similar as adjust_managed_page_count() and derive the zone from the page. It's worth noting that a memory block to be offlined/onlined is either completely "early" or "not early". add_memory() and friends can only add complete memory blocks and we only online/offline complete (individual) memory blocks. Signed-off-by: David Hildenbrand --- drivers/base/memory.c | 14 +++++++------- include/linux/memory_hotplug.h | 2 +- include/linux/mmzone.h | 7 +++++++ mm/memory_hotplug.c | 14 +++++++++++--- mm/page_alloc.c | 3 +++ 5 files changed, 29 insertions(+), 11 deletions(-) diff --git a/drivers/base/memory.c b/drivers/base/memory.c index aa31a21f33d7..86ec2dc82fc2 100644 --- a/drivers/base/memory.c +++ b/drivers/base/memory.c @@ -205,7 +205,8 @@ static int memory_block_online(struct memory_block *m= em) * now already properly populated. */ if (nr_vmemmap_pages) - adjust_present_page_count(zone, nr_vmemmap_pages); + adjust_present_page_count(pfn_to_page(start_pfn), + nr_vmemmap_pages); =20 return ret; } @@ -215,24 +216,23 @@ static int memory_block_offline(struct memory_block= *mem) unsigned long start_pfn =3D section_nr_to_pfn(mem->start_section_nr); unsigned long nr_pages =3D PAGES_PER_SECTION * sections_per_block; unsigned long nr_vmemmap_pages =3D mem->nr_vmemmap_pages; - struct zone *zone; int ret; =20 /* * Unaccount before offlining, such that unpopulated zone and kthreads * can properly be torn down in offline_pages(). */ - if (nr_vmemmap_pages) { - zone =3D page_zone(pfn_to_page(start_pfn)); - adjust_present_page_count(zone, -nr_vmemmap_pages); - } + if (nr_vmemmap_pages) + adjust_present_page_count(pfn_to_page(start_pfn), + -nr_vmemmap_pages); =20 ret =3D offline_pages(start_pfn + nr_vmemmap_pages, nr_pages - nr_vmemmap_pages); if (ret) { /* offline_pages() failed. Account back. 

 drivers/base/memory.c          | 14 +++++++-------
 include/linux/memory_hotplug.h |  2 +-
 include/linux/mmzone.h         |  7 +++++++
 mm/memory_hotplug.c            | 14 +++++++++++---
 mm/page_alloc.c                |  3 +++
 5 files changed, 29 insertions(+), 11 deletions(-)

diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index aa31a21f33d7..86ec2dc82fc2 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -205,7 +205,8 @@ static int memory_block_online(struct memory_block *mem)
 	 * now already properly populated.
 	 */
 	if (nr_vmemmap_pages)
-		adjust_present_page_count(zone, nr_vmemmap_pages);
+		adjust_present_page_count(pfn_to_page(start_pfn),
+					  nr_vmemmap_pages);
 
 	return ret;
 }
@@ -215,24 +216,23 @@ static int memory_block_offline(struct memory_block *mem)
 	unsigned long start_pfn = section_nr_to_pfn(mem->start_section_nr);
 	unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;
 	unsigned long nr_vmemmap_pages = mem->nr_vmemmap_pages;
-	struct zone *zone;
 	int ret;
 
 	/*
 	 * Unaccount before offlining, such that unpopulated zone and kthreads
 	 * can properly be torn down in offline_pages().
 	 */
-	if (nr_vmemmap_pages) {
-		zone = page_zone(pfn_to_page(start_pfn));
-		adjust_present_page_count(zone, -nr_vmemmap_pages);
-	}
+	if (nr_vmemmap_pages)
+		adjust_present_page_count(pfn_to_page(start_pfn),
+					  -nr_vmemmap_pages);
 
 	ret = offline_pages(start_pfn + nr_vmemmap_pages,
 			    nr_pages - nr_vmemmap_pages);
 	if (ret) {
 		/* offline_pages() failed. Account back. */
 		if (nr_vmemmap_pages)
-			adjust_present_page_count(zone, nr_vmemmap_pages);
+			adjust_present_page_count(pfn_to_page(start_pfn),
+						  nr_vmemmap_pages);
 		return ret;
 	}
 
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 068e3dcf4690..39b04e99a30e 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -95,7 +95,7 @@ static inline void zone_seqlock_init(struct zone *zone)
 extern int zone_grow_free_lists(struct zone *zone, unsigned long new_nr_pages);
 extern int zone_grow_waitqueues(struct zone *zone, unsigned long nr_pages);
 extern int add_one_highpage(struct page *page, int pfn, int bad_ppro);
-extern void adjust_present_page_count(struct zone *zone, long nr_pages);
+extern void adjust_present_page_count(struct page *page, long nr_pages);
 /* VM interface that may be used by firmware interface */
 extern int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
 				     struct zone *zone);
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index fcb535560028..6fbe59702bf2 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -540,6 +540,10 @@ struct zone {
 	 * is calculated as:
 	 *	present_pages = spanned_pages - absent_pages(pages in holes);
 	 *
+	 * present_early_pages is present pages existing within the zone
+	 * located on memory available since early boot, excluding hotplugged
+	 * memory.
+	 *
 	 * managed_pages is present pages managed by the buddy system, which
 	 * is calculated as (reserved_pages includes pages allocated by the
 	 * bootmem allocator):
@@ -572,6 +576,9 @@ struct zone {
 	atomic_long_t		managed_pages;
 	unsigned long		spanned_pages;
 	unsigned long		present_pages;
+#if defined(CONFIG_MEMORY_HOTPLUG)
+	unsigned long		present_early_pages;
+#endif
 #ifdef CONFIG_CMA
 	unsigned long		cma_pages;
 #endif
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 388c8627f17f..65dbb30f81c2 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -724,8 +724,16 @@ struct zone *zone_for_pfn_range(int online_type, int nid,
  * This function should only be called by memory_block_{online,offline},
  * and {online,offline}_pages.
  */
-void adjust_present_page_count(struct zone *zone, long nr_pages)
+void adjust_present_page_count(struct page *page, long nr_pages)
 {
+	struct zone *zone = page_zone(page);
+
+	/*
+	 * We only support onlining/offlining/adding/removing of complete
+	 * memory blocks; therefore, all pages are either early or hotplugged.
+	 */
+	if (early_section(__pfn_to_section(page_to_pfn(page))))
+		zone->present_early_pages += nr_pages;
 	zone->present_pages += nr_pages;
 	zone->zone_pgdat->node_present_pages += nr_pages;
 }
@@ -826,7 +834,7 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages, struct zone *z
 	}
 
 	online_pages_range(pfn, nr_pages);
-	adjust_present_page_count(zone, nr_pages);
+	adjust_present_page_count(pfn_to_page(pfn), nr_pages);
 
 	node_states_set_node(nid, &arg);
 	if (need_zonelists_rebuild)
@@ -1704,7 +1712,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 
 	/* removal success */
 	adjust_managed_page_count(pfn_to_page(start_pfn), -nr_pages);
-	adjust_present_page_count(zone, -nr_pages);
+	adjust_present_page_count(pfn_to_page(start_pfn), -nr_pages);
 
 	/* reinitialise watermarks and update pcp limits */
 	init_per_zone_wmark_min();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3e97e68aef7a..213728db3c01 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7255,6 +7255,9 @@ static void __init calculate_node_totalpages(struct pglist_data *pgdat,
 		zone->zone_start_pfn = 0;
 		zone->spanned_pages = size;
 		zone->present_pages = real_size;
+#if defined(CONFIG_MEMORY_HOTPLUG)
+		zone->present_early_pages = real_size;
+#endif
 
 		totalpages += size;
 		realtotalpages += real_size;
-- 
2.31.1
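
Postscript, illustration only: the new counter lets a policy derive the
hotplugged share of a zone as present_pages - present_early_pages and
bound ZONE_MOVABLE growth against kernel-usable early memory, which is
the stated motivation above. A hypothetical check -- may_online_movable()
and movable_ratio are assumed names for this sketch, not kernel API --
could look like:

#include <stdbool.h>

/*
 * Hypothetical consumer of present_early_pages; the ratio heuristic is an
 * illustrative assumption. The counter makes the needed split possible:
 * hotplugged pages of a zone = present_pages - present_early_pages.
 */
bool may_online_movable(unsigned long kernel_early_pages,
			unsigned long movable_pages,
			unsigned long nr_pages,
			unsigned long movable_ratio /* percent */)
{
	unsigned long max_movable = kernel_early_pages * movable_ratio / 100;

	/* Online to ZONE_MOVABLE only while the movable share stays bounded. */
	return movable_pages + nr_pages <= max_movable;
}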