From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Mike Rapoport <rppt@kernel.org>
Cc: <linux-arm-kernel@lists.infradead.org>, Andrew Morton <akpm@linux-foundation.org>, Mike Rapoport <rppt@linux.ibm.com>, Russell King <linux@armlinux.org.uk>, <linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>
Subject: Re: [PATCH 3/3] arm: extend pfn_valid to take into accound freed memory map alignment
Date: Wed, 19 May 2021 09:50:46 +0800	[thread overview]
Message-ID: <779d890b-6983-6138-4f74-eef7be0bbd4c@huawei.com> (raw)
In-Reply-To: <YKPi0eBWsHBDZCg/@kernel.org>

On 2021/5/18 23:52, Mike Rapoport wrote:
> On Tue, May 18, 2021 at 08:49:43PM +0800, Kefeng Wang wrote:
>>
>>
>> On 2021/5/18 17:06, Mike Rapoport wrote:
>>> From: Mike Rapoport <rppt@linux.ibm.com>
>>>
>>> When unused memory map is freed the preserved part of the memory map is
>>> extended to match pageblock boundaries because lots of core mm
>>> functionality relies on homogeneity of the memory map within pageblock
>>> boundaries.
>>>
>>> Since pfn_valid() is used to check whether there is a valid memory map
>>> entry for a PFN, make it return true also for PFNs that have memory map
>>> entries even if there is no actual memory populated there.
>>>
>>> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
>>> ---
>>>  arch/arm/mm/init.c | 15 ++++++++++++++-
>>>  1 file changed, 14 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
>>> index 9d4744a632c6..bb678c0ba143 100644
>>> --- a/arch/arm/mm/init.c
>>> +++ b/arch/arm/mm/init.c
>>> @@ -125,11 +125,24 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max_low,
>>>  int pfn_valid(unsigned long pfn)
>>>  {
>>>  	phys_addr_t addr = __pfn_to_phys(pfn);
>>> +	unsigned long pageblock_size = PAGE_SIZE * pageblock_nr_pages;
>>>  
>>>  	if (__phys_to_pfn(addr) != pfn)
>>>  		return 0;
>>>  
>>> -	return memblock_is_map_memory(addr);
>>> +	if (memblock_is_map_memory(addr))
>>> +		return 1;
>>> +
>>> +	/*
>>> +	 * If address less than pageblock_size bytes away from a present
>>> +	 * memory chunk there still will be a memory map entry for it
>>> +	 * because we round freed memory map to the pageblock boundaries
>>> +	 */
>>> +	if (memblock_is_map_memory(ALIGN(addr + 1, pageblock_size)) ||
>>> +	    memblock_is_map_memory(ALIGN_DOWN(addr, pageblock_size)))
>>> +		return 1;
>>
>> Hi Mike, with patch3, the system won't boot.
>
> Hmm, apparently I've miscalculated the ranges...
>
> Can you please check with the below patch on top of this series:

Yes, it works,

On node 0 totalpages: 311551
  Normal zone: 1230 pages used for memmap
  Normal zone: 0 pages reserved
  Normal zone: 157440 pages, LIFO batch:31
  Normal zone: 17152 pages in unavailable ranges
  HighMem zone: 154111 pages, LIFO batch:31
  HighMem zone: 513 pages in unavailable ranges

and the oom testcase could pass.

Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com>

There is memblock_is_region_reserved() (check if a region intersects
reserved memory), and it also checks the size; should we add a similar func?
>
> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> index bb678c0ba143..2fafbbc8e73b 100644
> --- a/arch/arm/mm/init.c
> +++ b/arch/arm/mm/init.c
> @@ -138,8 +138,9 @@ int pfn_valid(unsigned long pfn)
>  	 * memory chunk there still will be a memory map entry for it
>  	 * because we round freed memory map to the pageblock boundaries
>  	 */
> -	if (memblock_is_map_memory(ALIGN(addr + 1, pageblock_size)) ||
> -	    memblock_is_map_memory(ALIGN_DOWN(addr, pageblock_size)))
> +	if (memblock_overlaps_region(&memblock.memory,
> +				     ALIGN_DOWN(addr, pageblock_size),
> +				     pageblock_size))
>  		return 1;
>
>  	return 0;
>
Thread overview: (20 messages)

2021-05-18  9:06 [PATCH 0/3] memblock, arm: fixes for freeing of the memory map (Mike Rapoport)
2021-05-18  9:06 ` [PATCH 1/3] memblock: free_unused_memmap: use pageblock units instead of MAX_ORDER (Mike Rapoport)
2021-05-18  9:06 ` [PATCH 2/3] memblock: align freed memory map on pageblock boundaries with SPARSEMEM (Mike Rapoport)
2021-05-18  9:06 ` [PATCH 3/3] arm: extend pfn_valid to take into accound freed memory map alignment (Mike Rapoport)
2021-05-18  9:44   ` Russell King (Oracle)
2021-05-18 10:53     ` Mike Rapoport
2021-05-18 12:49   ` Kefeng Wang
2021-05-18 15:52     ` Mike Rapoport
2021-05-19  1:50       ` Kefeng Wang [this message]
2021-05-19 13:25         ` Mike Rapoport