From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Michal Hocko,
 Wei Yang, Baoquan He, Pankaj Gupta, Oscar Salvador, Tony Luck, Fenghua Yu,
 Logan Gunthorpe, Dan Williams, Mike Rapoport, "Matthew Wilcox (Oracle)",
 Michel Lespinasse, linux-ia64@vger.kernel.org
Subject: [PATCH v1 10/11] mm: pass migratetype into memmap_init_zone() and move_pfn_range_to_zone()
Date: Wed, 19 Aug 2020 12:11:56 +0200
Message-Id: <20200819101157.12723-11-david@redhat.com>
In-Reply-To: <20200819101157.12723-1-david@redhat.com>
References: <20200819101157.12723-1-david@redhat.com>

On the memory hotplug path, we want to start with MIGRATE_ISOLATE, so
that we can un-isolate the pages only after memory onlining is
complete. Let's allow passing in the migratetype.

Cc: Andrew Morton
Cc: Michal Hocko
Cc: Wei Yang
Cc: Baoquan He
Cc: Pankaj Gupta
Cc: Oscar Salvador
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Logan Gunthorpe
Cc: Dan Williams
Cc: Mike Rapoport
Cc: "Matthew Wilcox (Oracle)"
Cc: Michel Lespinasse
Cc: linux-ia64@vger.kernel.org
Signed-off-by: David Hildenbrand
---
 arch/ia64/mm/init.c            |  4 ++--
 include/linux/memory_hotplug.h |  3 ++-
 include/linux/mm.h             |  3 ++-
 mm/memory_hotplug.c            | 11 ++++++++---
 mm/memremap.c                  |  3 ++-
 mm/page_alloc.c                | 21 ++++++++++++---------
 6 files changed, 28 insertions(+), 17 deletions(-)
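Note for reviewers (not part of the commit, and dropped by git-am): a
minimal sketch of how an onlining path could use the new parameter to
start with isolated pageblocks. The function example_online_pages() is
made up for illustration, error handling is elided, and the
un-isolation step is only hinted at:

	/*
	 * Hypothetical caller, simplified from online_pages(): associate
	 * the range with a zone but keep every pageblock isolated until
	 * onlining has finished, so nothing can allocate from it early.
	 */
	static int example_online_pages(int nid, unsigned long pfn,
					unsigned long nr_pages, int online_type)
	{
		struct zone *zone;

		/* Associate the pfn range with the zone, isolated from the start. */
		zone = zone_for_pfn_range(online_type, nid, pfn, nr_pages);
		move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_ISOLATE);

		/* ... actually online the pages, adjust zone/node counters ... */

		/*
		 * Once onlining is complete, un-isolate the pageblocks, e.g.,
		 * via undo_isolate_page_range(pfn, pfn + nr_pages,
		 * MIGRATE_MOVABLE), making the pages usable.
		 */
		return 0;
	}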
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 0b3fb4c7af292..82b7a46ddd23d 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -538,7 +538,7 @@ virtual_memmap_init(u64 start, u64 end, void *arg)
 	if (map_start < map_end)
 		memmap_init_zone((unsigned long)(map_end - map_start),
 				 args->nid, args->zone, page_to_pfn(map_start),
-				 MEMMAP_EARLY, NULL);
+				 MEMMAP_EARLY, NULL, MIGRATE_MOVABLE);
 	return 0;
 }
 
@@ -548,7 +548,7 @@ memmap_init (unsigned long size, int nid, unsigned long zone,
 {
 	if (!vmem_map) {
 		memmap_init_zone(size, nid, zone, start_pfn, MEMMAP_EARLY,
-				 NULL);
+				 NULL, MIGRATE_MOVABLE);
 	} else {
 		struct page *start;
 		struct memmap_init_callback_data args;
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 0b461691d1a49..cbafeda859380 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -346,7 +346,8 @@ extern int add_memory_resource(int nid, struct resource *resource);
 extern int add_memory_driver_managed(int nid, u64 start, u64 size,
 				     const char *resource_name);
 extern void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
-				   unsigned long nr_pages, struct vmem_altmap *altmap);
+				   unsigned long nr_pages,
+				   struct vmem_altmap *altmap, int migratetype);
 extern void remove_pfn_range_from_zone(struct zone *zone,
 				       unsigned long start_pfn,
 				       unsigned long nr_pages);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8ab941cf73f44..c842aa2a97ba2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2409,7 +2409,8 @@ extern int __meminit __early_pfn_to_nid(unsigned long pfn,
 
 extern void set_dma_reserve(unsigned long new_dma_reserve);
 extern void memmap_init_zone(unsigned long, int, unsigned long, unsigned long,
-		enum memmap_context, struct vmem_altmap *);
+		enum memmap_context, struct vmem_altmap *,
+		int migratetype);
 extern void setup_per_zone_wmarks(void);
 extern int __meminit init_per_zone_wmark_min(void);
 extern void mem_init(void);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 3aba0d956f9b1..1c16a5def781e 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -693,9 +693,14 @@ static void __meminit resize_pgdat_range(struct pglist_data *pgdat, unsigned lon
  * Associate the pfn range with the given zone, initializing the memmaps
  * and resizing the pgdat/zone data to span the added pages. After this
  * call, all affected pages are PG_reserved.
+ *
+ * All aligned pageblocks are initialized to the specified migratetype
+ * (usually MIGRATE_MOVABLE). Besides setting the migratetype, no related
+ * zone stats (e.g., nr_isolate_pageblock) are touched.
  */
 void __ref move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
-				  unsigned long nr_pages, struct vmem_altmap *altmap)
+				  unsigned long nr_pages,
+				  struct vmem_altmap *altmap, int migratetype)
 {
 	struct pglist_data *pgdat = zone->zone_pgdat;
 	int nid = pgdat->node_id;
@@ -720,7 +725,7 @@ void __ref move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
 	 * are reserved so nobody should be touching them so we should be safe
 	 */
 	memmap_init_zone(nr_pages, nid, zone_idx(zone), start_pfn,
-			 MEMMAP_HOTPLUG, altmap);
+			 MEMMAP_HOTPLUG, altmap, migratetype);
 
 	set_zone_contiguous(zone);
 }
@@ -800,7 +805,7 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 
 	/* associate pfn range with the zone */
 	zone = zone_for_pfn_range(online_type, nid, pfn, nr_pages);
-	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL);
+	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_MOVABLE);
 
 	arg.start_pfn = pfn;
 	arg.nr_pages = nr_pages;
diff --git a/mm/memremap.c b/mm/memremap.c
index 8afcc54c89286..04dc1f4ed634e 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -342,7 +342,8 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
 
 		zone = &NODE_DATA(nid)->node_zones[ZONE_DEVICE];
 		move_pfn_range_to_zone(zone, PHYS_PFN(res->start),
-				       PHYS_PFN(resource_size(res)), params.altmap);
+				       PHYS_PFN(resource_size(res)),
+				       params.altmap, MIGRATE_MOVABLE);
 	}
 
 	mem_hotplug_done();
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5db0b35f95e20..9f2dc61968689 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5970,10 +5970,15 @@ overlap_memmap_init(unsigned long zone, unsigned long *pfn)
  * Initially all pages are reserved - free ones are freed
  * up by memblock_free_all() once the early boot process is
  * done. Non-atomic initialization, single-pass.
+ *
+ * All aligned pageblocks are initialized to the specified migratetype
+ * (usually MIGRATE_MOVABLE). Besides setting the migratetype, no related
+ * zone stats (e.g., nr_isolate_pageblock) are touched.
  */
 void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
-		unsigned long start_pfn, enum memmap_context context,
-		struct vmem_altmap *altmap)
+		unsigned long start_pfn,
+		enum memmap_context context,
+		struct vmem_altmap *altmap, int migratetype)
 {
 	unsigned long pfn, end_pfn = start_pfn + size;
 	struct page *page;
@@ -6017,14 +6022,12 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		__SetPageReserved(page);
 
 		/*
-		 * Mark the block movable so that blocks are reserved for
-		 * movable at startup. This will force kernel allocations
-		 * to reserve their blocks rather than leaking throughout
-		 * the address space during boot when many long-lived
-		 * kernel allocations are made.
+		 * Usually, we want to mark the pageblock MIGRATE_MOVABLE,
+		 * such that unmovable allocations won't be scattered all
+		 * over the place during system boot.
 		 */
 		if (IS_ALIGNED(pfn, pageblock_nr_pages)) {
-			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+			set_pageblock_migratetype(page, migratetype);
 			cond_resched();
 		}
 		pfn++;
@@ -6124,7 +6127,7 @@ void __meminit __weak memmap_init(unsigned long size, int nid,
 		if (end_pfn > start_pfn) {
 			size = end_pfn - start_pfn;
 			memmap_init_zone(size, nid, zone, start_pfn,
-					 MEMMAP_EARLY, NULL);
+					 MEMMAP_EARLY, NULL, MIGRATE_MOVABLE);
 		}
 	}
 }
-- 
2.26.2