From: Ard Biesheuvel
Date: Tue, 11 May 2021 12:23:38 +0200
Subject: Re: [PATCH v4 2/4] memblock: update initialization of reserved pages
To: Mike Rapoport
Cc: Andrew Morton, Anshuman Khandual, Catalin Marinas, David Hildenbrand,
	Marc Zyngier, Mark Rutland, Mike Rapoport, Will Deacon, kvmarm,
	Linux ARM, Linux Kernel Mailing List, Linux Memory Management List
References: <20210511100550.28178-1-rppt@kernel.org> <20210511100550.28178-3-rppt@kernel.org>
In-Reply-To: <20210511100550.28178-3-rppt@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, 11 May 2021 at 12:06, Mike Rapoport wrote:
>
> From: Mike Rapoport
>
> The struct pages representing a reserved memory region are initialized
> using the reserve_bootmem_region() function. This function is called
> for each reserved region just before the memory is freed from memblock
> to the buddy page allocator.
>
> The struct pages for MEMBLOCK_NOMAP regions are kept with the default
> values set by the memory map initialization, which makes it necessary
> to have special treatment for such pages in pfn_valid() and
> pfn_valid_within().
>
> Split out initialization of the reserved pages to a function with a
> meaningful name, treat the MEMBLOCK_NOMAP regions the same way as the
> reserved regions, and mark struct pages for the NOMAP regions as
> PageReserved.
>
> Signed-off-by: Mike Rapoport
> Reviewed-by: David Hildenbrand
> Reviewed-by: Anshuman Khandual

Acked-by: Ard Biesheuvel

> ---
>  include/linux/memblock.h |  4 +++-
>  mm/memblock.c            | 28 ++++++++++++++++++++++++++--
>  2 files changed, 29 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index 5984fff3f175..1b4c97c151ae 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -30,7 +30,9 @@ extern unsigned long long max_possible_pfn;
>   * @MEMBLOCK_NONE: no special request
>   * @MEMBLOCK_HOTPLUG: hotpluggable region
>   * @MEMBLOCK_MIRROR: mirrored region
> - * @MEMBLOCK_NOMAP: don't add to kernel direct mapping
> + * @MEMBLOCK_NOMAP: don't add to kernel direct mapping and treat as
> + * reserved in the memory map; refer to memblock_mark_nomap() description
> + * for further details
>   */
>  enum memblock_flags {
>  	MEMBLOCK_NONE		= 0x0,	/* No special request */
> diff --git a/mm/memblock.c b/mm/memblock.c
> index afaefa8fc6ab..3abf2c3fea7f 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -906,6 +906,11 @@ int __init_memblock memblock_mark_mirror(phys_addr_t base, phys_addr_t size)
>   * @base: the base phys addr of the region
>   * @size: the size of the region
>   *
> + * The memory regions marked with %MEMBLOCK_NOMAP will not be added to the
> + * direct mapping of the physical memory. These regions will still be
> + * covered by the memory map. The struct page representing NOMAP memory
> + * frames in the memory map will be PageReserved()
> + *
>   * Return: 0 on success, -errno on failure.
>   */
>  int __init_memblock memblock_mark_nomap(phys_addr_t base, phys_addr_t size)
> @@ -2002,6 +2007,26 @@ static unsigned long __init __free_memory_core(phys_addr_t start,
>  	return end_pfn - start_pfn;
>  }
>
> +static void __init memmap_init_reserved_pages(void)
> +{
> +	struct memblock_region *region;
> +	phys_addr_t start, end;
> +	u64 i;
> +
> +	/* initialize struct pages for the reserved regions */
> +	for_each_reserved_mem_range(i, &start, &end)
> +		reserve_bootmem_region(start, end);
> +
> +	/* and also treat struct pages for the NOMAP regions as PageReserved */
> +	for_each_mem_region(region) {
> +		if (memblock_is_nomap(region)) {
> +			start = region->base;
> +			end = start + region->size;
> +			reserve_bootmem_region(start, end);
> +		}
> +	}
> +}
> +
>  static unsigned long __init free_low_memory_core_early(void)
>  {
>  	unsigned long count = 0;
> @@ -2010,8 +2035,7 @@ static unsigned long __init free_low_memory_core_early(void)
>
>  	memblock_clear_hotplug(0, -1);
>
> -	for_each_reserved_mem_range(i, &start, &end)
> -		reserve_bootmem_region(start, end);
> +	memmap_init_reserved_pages();
>
>  	/*
>  	 * We need to use NUMA_NO_NODE instead of NODE_DATA(0)->node_id
> --
> 2.28.0
>