* [PATCH] arm64: mm: free the initrd reserved memblock in an aligned manner
@ 2019-07-03 23:59 Yi Wang
2019-07-05 15:11 ` James Morse
0 siblings, 1 reply; 2+ messages in thread
From: Yi Wang @ 2019-07-03 23:59 UTC (permalink / raw)
To: catalin.marinas
Cc: wang.yi59, f.fainelli, jiang.xuexin, david, robin.murphy,
will.deacon, linux-kernel, rppt, xue.zhihong, hannes,
Junhua Huang, akpm, logang, linux-arm-kernel, ghackmann
From: Junhua Huang <huang.junhua@zte.com.cn>
We should free the initrd's reserved memblock region in an aligned
manner, because the region is reserved in an aligned manner in
arm64_memblock_init(). Otherwise, fragments are left behind in the
memblock reserved regions, e.g.:
/sys/kernel/debug/memblock # cat reserved
0: 0x0000000080080000..0x00000000817fafff
1: 0x0000000083400000..0x0000000083ffffff
2: 0x0000000090000000..0x000000009000407f
3: 0x00000000b0000000..0x00000000b000003f
4: 0x00000000b26184ea..0x00000000b2618fff
Fragments such as the ranges b0000000..b000003f and
b26184ea..b2618fff should also be freed.

free_reserved_area() must come after memblock_free():
free_reserved_area() calls __free_pages(), and once that is done the
pages can be allocated somewhere else, while memblock and iomem
would still report them as reserved memory.
Signed-off-by: Junhua Huang <huang.junhua@zte.com.cn>
---
arch/arm64/mm/init.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index d2adffb81b5d..03774b8bd364 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -580,8 +580,13 @@ void free_initmem(void)
#ifdef CONFIG_BLK_DEV_INITRD
void __init free_initrd_mem(unsigned long start, unsigned long end)
{
+ unsigned long aligned_start, aligned_end;
+
+ aligned_start = __virt_to_phys(start) & PAGE_MASK;
+ aligned_end = PAGE_ALIGN(__virt_to_phys(end));
+ memblock_free(aligned_end, aligned_end - aligned_start);
free_reserved_area((void *)start, (void *)end, 0, "initrd");
- memblock_free(__virt_to_phys(start), end - start);
+
}
#endif
--
2.15.2
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* Re: [PATCH] arm64: mm: free the initrd reserved memblock in an aligned manner
2019-07-03 23:59 [PATCH] arm64: mm: free the initrd reserved memblock in an aligned manner Yi Wang
@ 2019-07-05 15:11 ` James Morse
0 siblings, 0 replies; 2+ messages in thread
From: James Morse @ 2019-07-05 15:11 UTC (permalink / raw)
To: Yi Wang
Cc: f.fainelli, jiang.xuexin, david, catalin.marinas, will.deacon,
robin.murphy, linux-kernel, rppt, xue.zhihong, hannes,
Junhua Huang, akpm, logang, linux-arm-kernel, ghackmann
Hi,
On 04/07/2019 00:59, Yi Wang wrote:
> From: Junhua Huang <huang.junhua@zte.com.cn>
>
> We should free the initrd's reserved memblock region in an aligned
> manner, because the region is reserved in an aligned manner in
> arm64_memblock_init(). Otherwise, fragments are left behind in the
> memblock reserved regions, e.g.:
> /sys/kernel/debug/memblock # cat reserved
> 0: 0x0000000080080000..0x00000000817fafff
> 1: 0x0000000083400000..0x0000000083ffffff
> 2: 0x0000000090000000..0x000000009000407f
> 3: 0x00000000b0000000..0x00000000b000003f
> 4: 0x00000000b26184ea..0x00000000b2618fff
> Fragments such as the ranges b0000000..b000003f and
> b26184ea..b2618fff should also be freed.
>
> free_reserved_area() must come after memblock_free():
> free_reserved_area() calls __free_pages(), and once that is done the
> pages can be allocated somewhere else, while memblock and iomem
> would still report them as reserved memory.
>
> Signed-off-by: Junhua Huang <huang.junhua@zte.com.cn>
You need to add your own Signed-off-by after Junhua Huang's. This tells the maintainer
that you're providing the patch with the 'Developer's Certificate of Origin'. Details in
/Documentation/process/submitting-patches.rst.
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index d2adffb81b5d..03774b8bd364 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -580,8 +580,13 @@ void free_initmem(void)
> #ifdef CONFIG_BLK_DEV_INITRD
> void __init free_initrd_mem(unsigned long start, unsigned long end)
> {
> + unsigned long aligned_start, aligned_end;
> +
> + aligned_start = __virt_to_phys(start) & PAGE_MASK;
> + aligned_end = PAGE_ALIGN(__virt_to_phys(end));
> + memblock_free(aligned_end, aligned_end - aligned_start);
We're not free-ing the same memory as we reserved here!
(start/end typo)
> free_reserved_area((void *)start, (void *)end, 0, "initrd");
> - memblock_free(__virt_to_phys(start), end - start);
> +
(stray newline)
> }
> #endif
Thanks,
James