From: Mike Rapoport <rppt@kernel.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Zhou Guanghui <zhouguanghui1@huawei.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
xuqiang36@huawei.com
Subject: Re: [PATCH] memblock: config the number of init memblock regions
Date: Wed, 11 May 2022 09:03:02 +0300 [thread overview]
Message-ID: <YntRlrwJeP40q6Hg@kernel.org> (raw)
In-Reply-To: <20220510185523.3f7479b8ffc49a8a7c17d328@linux-foundation.org>
On Tue, May 10, 2022 at 06:55:23PM -0700, Andrew Morton wrote:
> On Wed, 11 May 2022 01:05:30 +0000 Zhou Guanghui <zhouguanghui1@huawei.com> wrote:
>
> > During early boot, the number of memblock regions may exceed 128 (some
> > memory areas are not reported to the kernel due to test failures, so
> > contiguous memory ends up reported as multiple fragments). If the
> > number of regions exceeds the size of the initial static array before
> > the array can be resized, the excess memory is lost.
I'd like to see more details about how firmware creates that sparse memory
map in the changelog.
> >
> > ...
> >
> > --- a/mm/Kconfig
> > +++ b/mm/Kconfig
> > @@ -89,6 +89,14 @@ config SPARSEMEM_VMEMMAP
> > pfn_to_page and page_to_pfn operations. This is the most
> > efficient option when sufficient kernel resources are available.
> >
> > +config MEMBLOCK_INIT_REGIONS
> > + int "Number of init memblock regions"
> > + range 128 1024
> > + default 128
> > + help
> > +	  The number of initial memblock regions used to track "memory" and
> > +	  "reserved" memblocks during early boot.
> > +
> > config HAVE_MEMBLOCK_PHYS_MAP
> > bool
> >
> > diff --git a/mm/memblock.c b/mm/memblock.c
> > index e4f03a6e8e56..6893d26b750e 100644
> > --- a/mm/memblock.c
> > +++ b/mm/memblock.c
> > @@ -22,7 +22,7 @@
> >
> > #include "internal.h"
> >
> > -#define INIT_MEMBLOCK_REGIONS 128
> > +#define INIT_MEMBLOCK_REGIONS CONFIG_MEMBLOCK_INIT_REGIONS
>
> Consistent naming would be nice - MEMBLOCK_INIT versus INIT_MEMBLOCK.
>
> Can we simply increase INIT_MEMBLOCK_REGIONS to 1024 and avoid the
> config option? It appears that the overhead from this would be 60kB or
> so.
60k is not big, but using a 1024-entry array for the 2-4 memory banks on
systems that don't report such a fragmented memory map is really a waste.
We can make this a per-platform opt-in, like INIT_MEMBLOCK_RESERVED_REGIONS ...
> Or zero if CONFIG_ARCH_KEEP_MEMBLOCK and CONFIG_MEMORY_HOTPLUG
> are cooperating.
... or add code that will discard unused parts of memblock arrays even if
CONFIG_ARCH_KEEP_MEMBLOCK=y.
--
Sincerely yours,
Mike.
Thread overview: 7+ messages
2022-05-11 1:05 [PATCH] memblock: config the number of init memblock regions Zhou Guanghui
2022-05-11 1:55 ` Andrew Morton
2022-05-11 6:03 ` Mike Rapoport [this message]
2022-05-12 2:46 ` Zhouguanghui (OS Kernel)
2022-05-12 6:28 ` Mike Rapoport
2022-05-25 16:44 ` Darren Hart
2022-05-25 17:12 ` Mike Rapoport