linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH] ARM: highmem: avoid clobbering non-page aligned memory reservations
@ 2020-10-29 11:03 Ard Biesheuvel
  2020-10-29 11:14 ` Ard Biesheuvel
  0 siblings, 1 reply; 8+ messages in thread
From: Ard Biesheuvel @ 2020-10-29 11:03 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: linus.walleij, linux, stable, Ard Biesheuvel

free_highpages() iterates over the free memblock regions in high
memory, and marks each page as available for the memory management
system. However, as it rounds the end of each region downwards, we
may end up freeing a page that is memblock_reserve()d, resulting
in memory corruption. So align the end of the range to the next
page instead.

Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index a391804c7ce3..d41781cb5496 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -354,7 +354,7 @@ static void __init free_highpages(void)
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
 		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
 
 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
-- 
2.17.1
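
For readers following the rounding discussion in this thread, the arithmetic
can be reproduced with a minimal userspace sketch. The macro definitions
below mirror the generic kernel ones (include/linux/pfn.h and
include/linux/mm.h) as recalled here, and the boundary address is invented
for illustration:

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))

#define PHYS_PFN(x)	((unsigned long)((x) >> PAGE_SHIFT))			/* rounds down */
#define PFN_DOWN(x)	((unsigned long)((x) >> PAGE_SHIFT))			/* rounds down */
#define PFN_UP(x)	((unsigned long)(((x) + PAGE_SIZE - 1) >> PAGE_SHIFT))	/* rounds up */
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & PAGE_MASK)			/* next page boundary */

int main(void)
{
	unsigned long addr = 0x2ffff234UL;	/* not page aligned */

	printf("PHYS_PFN(addr)             = %#lx\n", PHYS_PFN(addr));			/* 0x2ffff */
	printf("PFN_UP(addr)               = %#lx\n", PFN_UP(addr));			/* 0x30000 */
	printf("PFN_DOWN(addr)             = %#lx\n", PFN_DOWN(addr));			/* 0x2ffff */
	printf("PHYS_PFN(PAGE_ALIGN(addr)) = %#lx\n", PHYS_PFN(PAGE_ALIGN(addr)));	/* 0x30000 */
	return 0;
}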



* Re: [PATCH] ARM: highmem: avoid clobbering non-page aligned memory reservations
  2020-10-29 11:03 [PATCH] ARM: highmem: avoid clobbering non-page aligned memory reservations Ard Biesheuvel
@ 2020-10-29 11:14 ` Ard Biesheuvel
  2020-10-30  2:25   ` Florian Fainelli
  0 siblings, 1 reply; 8+ messages in thread
From: Ard Biesheuvel @ 2020-10-29 11:14 UTC (permalink / raw)
  To: Linux ARM; +Cc: Linus Walleij, Russell King, # 3.4.x

On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel <ardb@kernel.org> wrote:
>
> free_highpages() iterates over the free memblock regions in high
> memory, and marks each page as available for the memory management
> system. However, as it rounds the end of each region downwards, we
> may end up freeing a page that is memblock_reserve()d, resulting
> in memory corruption. So align the end of the range to the next
> page instead.
>
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
>  arch/arm/mm/init.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> index a391804c7ce3..d41781cb5496 100644
> --- a/arch/arm/mm/init.c
> +++ b/arch/arm/mm/init.c
> @@ -354,7 +354,7 @@ static void __init free_highpages(void)
>         for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
>                                 &range_start, &range_end, NULL) {
>                 unsigned long start = PHYS_PFN(range_start);
> -               unsigned long end = PHYS_PFN(range_end);
> +               unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
>

Apologies, this should be

-               unsigned long start = PHYS_PFN(range_start);
+               unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
                unsigned long end = PHYS_PFN(range_end);


Strangely enough, the wrong version above also fixed the issue I was
seeing, but it is start that needs rounding up, not end.
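
To make the distinction concrete, consider a hypothetical layout (addresses
invented for illustration, PAGE_SHIFT == 12), not the one from the report:

/*
 *   0x2fffee00 - 0x2ffff233   memblock_reserve()d
 *   0x2ffff234 - 0x34000000   free range reported by for_each_free_mem_range()
 *
 *   PHYS_PFN(0x2ffff234)             == 0x2ffff  -> the page at 0x2ffff000 is
 *                                                   freed, clobbering the tail
 *                                                   of the reservation
 *   PHYS_PFN(PAGE_ALIGN(0x2ffff234)) == 0x30000  -> first fully free page, so
 *                                                   the reservation is left alone
 */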

>                 /* Ignore complete lowmem entries */
>                 if (end <= max_low)
> --
> 2.17.1
>


* Re: [PATCH] ARM: highmem: avoid clobbering non-page aligned memory reservations
  2020-10-29 11:14 ` Ard Biesheuvel
@ 2020-10-30  2:25   ` Florian Fainelli
  2020-10-30  9:29     ` Ard Biesheuvel
  0 siblings, 1 reply; 8+ messages in thread
From: Florian Fainelli @ 2020-10-30  2:25 UTC (permalink / raw)
  To: Ard Biesheuvel, Linux ARM; +Cc: Linus Walleij, Russell King, # 3.4.x



On 10/29/2020 4:14 AM, Ard Biesheuvel wrote:
> On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel <ardb@kernel.org> wrote:
>>
>> free_highpages() iterates over the free memblock regions in high
>> memory, and marks each page as available for the memory management
>> system. However, as it rounds the end of each region downwards, we
>> may end up freeing a page that is memblock_reserve()d, resulting
>> in memory corruption. So align the end of the range to the next
>> page instead.
>>
>> Cc: <stable@vger.kernel.org>
>> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
>> ---
>>  arch/arm/mm/init.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
>> index a391804c7ce3..d41781cb5496 100644
>> --- a/arch/arm/mm/init.c
>> +++ b/arch/arm/mm/init.c
>> @@ -354,7 +354,7 @@ static void __init free_highpages(void)
>>         for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
>>                                 &range_start, &range_end, NULL) {
>>                 unsigned long start = PHYS_PFN(range_start);
>> -               unsigned long end = PHYS_PFN(range_end);
>> +               unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
>>
> 
> Apologies, this should be
> 
> -               unsigned long start = PHYS_PFN(range_start);
> +               unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
>                 unsigned long end = PHYS_PFN(range_end);
> 
> 
> Strangely enough, the wrong version above also fixed the issue I was
> seeing, but it is start that needs rounding up, not end.

Is there a particular commit that you identified which could be used as
 Fixes: tag to ease the back porting of such a change?
-- 
Florian


* Re: [PATCH] ARM: highmem: avoid clobbering non-page aligned memory reservations
  2020-10-30  2:25   ` Florian Fainelli
@ 2020-10-30  9:29     ` Ard Biesheuvel
  2020-10-30 15:18       ` Mike Rapoport
  0 siblings, 1 reply; 8+ messages in thread
From: Ard Biesheuvel @ 2020-10-30  9:29 UTC (permalink / raw)
  To: Florian Fainelli, Mike Rapoport
  Cc: # 3.4.x, Linus Walleij, Russell King, Linux ARM

(+ Mike)

On Fri, 30 Oct 2020 at 03:25, Florian Fainelli <f.fainelli@gmail.com> wrote:
>
>
>
> On 10/29/2020 4:14 AM, Ard Biesheuvel wrote:
> > On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel <ardb@kernel.org> wrote:
> >>
> >> free_highpages() iterates over the free memblock regions in high
> >> memory, and marks each page as available for the memory management
> >> system. However, as it rounds the end of each region downwards, we
> >> may end up freeing a page that is memblock_reserve()d, resulting
> >> in memory corruption. So align the end of the range to the next
> >> page instead.
> >>
> >> Cc: <stable@vger.kernel.org>
> >> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> >> ---
> >>  arch/arm/mm/init.c | 2 +-
> >>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> >> index a391804c7ce3..d41781cb5496 100644
> >> --- a/arch/arm/mm/init.c
> >> +++ b/arch/arm/mm/init.c
> >> @@ -354,7 +354,7 @@ static void __init free_highpages(void)
> >>         for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
> >>                                 &range_start, &range_end, NULL) {
> >>                 unsigned long start = PHYS_PFN(range_start);
> >> -               unsigned long end = PHYS_PFN(range_end);
> >> +               unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
> >>
> >
> > Apologies, this should be
> >
> > -               unsigned long start = PHYS_PFN(range_start);
> > +               unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
> >                 unsigned long end = PHYS_PFN(range_end);
> >
> >
> > Strangely enough, the wrong version above also fixed the issue I was
> > seeing, but it is start that needs rounding up, not end.
>
> Is there a particular commit that you identified which could be used as
>  Fixes: tag to ease the back porting of such a change?

Ah hold on. This appears to be a very recent regression, in
cddb5ddf2b76debdb8cad1728ad0a9321383d933, added in v5.10-rc1.

The old code was

unsigned long start = memblock_region_memory_base_pfn(mem);

which uses PFN_UP() to round up, whereas the new code rounds down.

Looks like this is broken on a lot of platforms.

Mike?
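
For reference, the helpers the old code relied on look roughly like this
(quoted from memory of include/linux/memblock.h; the exact definitions should
be checked in the tree):

static inline unsigned long
memblock_region_memory_base_pfn(const struct memblock_region *reg)
{
	return PFN_UP(reg->base);
}

static inline unsigned long
memblock_region_memory_end_pfn(const struct memblock_region *reg)
{
	return PFN_DOWN(reg->base + reg->size);
}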


* Re: [PATCH] ARM: highmem: avoid clobbering non-page aligned memory reservations
  2020-10-30  9:29     ` Ard Biesheuvel
@ 2020-10-30 15:18       ` Mike Rapoport
  2020-10-30 15:22         ` Ard Biesheuvel
  0 siblings, 1 reply; 8+ messages in thread
From: Mike Rapoport @ 2020-10-30 15:18 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: # 3.4.x, Linus Walleij, Florian Fainelli, Russell King, Linux ARM

Hi Ard,

On Fri, Oct 30, 2020 at 10:29:16AM +0100, Ard Biesheuvel wrote:
> (+ Mike)
> 
> On Fri, 30 Oct 2020 at 03:25, Florian Fainelli <f.fainelli@gmail.com> wrote:
> >
> >
> >
> > On 10/29/2020 4:14 AM, Ard Biesheuvel wrote:
> > > On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel <ardb@kernel.org> wrote:
> > >>
> > >> free_highpages() iterates over the free memblock regions in high
> > >> memory, and marks each page as available for the memory management
> > >> system. However, as it rounds the end of each region downwards, we
> > >> may end up freeing a page that is memblock_reserve()d, resulting
> > >> in memory corruption. So align the end of the range to the next
> > >> page instead.
> > >>
> > >> Cc: <stable@vger.kernel.org>
> > >> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > >> ---
> > >>  arch/arm/mm/init.c | 2 +-
> > >>  1 file changed, 1 insertion(+), 1 deletion(-)
> > >>
> > >> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> > >> index a391804c7ce3..d41781cb5496 100644
> > >> --- a/arch/arm/mm/init.c
> > >> +++ b/arch/arm/mm/init.c
> > >> @@ -354,7 +354,7 @@ static void __init free_highpages(void)
> > >>         for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
> > >>                                 &range_start, &range_end, NULL) {
> > >>                 unsigned long start = PHYS_PFN(range_start);
> > >> -               unsigned long end = PHYS_PFN(range_end);
> > >> +               unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
> > >>
> > >
> > > Apologies, this should be
> > >
> > > -               unsigned long start = PHYS_PFN(range_start);
> > > +               unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
> > >                 unsigned long end = PHYS_PFN(range_end);
> > >
> > >
> > > Strangely enough, the wrong version above also fixed the issue I was
> > > seeing, but it is start that needs rounding up, not end.
> >
> > Is there a particular commit that you identified which could be used as
> >  Fixes: tag to ease the back porting of such a change?
> 
> Ah hold on. This appears to be a very recent regression, in
> cddb5ddf2b76debdb8cad1728ad0a9321383d933, added in v5.10-rc1.
> 
> The old code was
> 
> unsigned long start = memblock_region_memory_base_pfn(mem);
> 
> which uses PFN_UP() to round up, whereas the new code rounds down.
> 
> Looks like this is broken on a lot of platforms.
> 
> Mike?

I've reviewed again the whole series and it seems that only highmem
initialization on arm and xtensa (that copied this code from arm) have
this problem. I might have missed something again, though.

So, to restore the original behaviour I think the fix should be

	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
				&range_start, &range_end, NULL) {
		unsigned long start = PHYS_UP(range_start);
		unsigned long end = PHYS_DOWN(range_end);


-- 
Sincerely yours,
Mike.


* Re: [PATCH] ARM: highmem: avoid clobbering non-page aligned memory reservations
  2020-10-30 15:18       ` Mike Rapoport
@ 2020-10-30 15:22         ` Ard Biesheuvel
  2020-10-30 21:31           ` Mike Rapoport
  0 siblings, 1 reply; 8+ messages in thread
From: Ard Biesheuvel @ 2020-10-30 15:22 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: # 3.4.x, Linus Walleij, Florian Fainelli, Russell King, Linux ARM

On Fri, 30 Oct 2020 at 16:18, Mike Rapoport <rppt@linux.ibm.com> wrote:
>
> Hi Ard,
>
> On Fri, Oct 30, 2020 at 10:29:16AM +0100, Ard Biesheuvel wrote:
> > (+ Mike)
> >
> > On Fri, 30 Oct 2020 at 03:25, Florian Fainelli <f.fainelli@gmail.com> wrote:
> > >
> > >
> > >
> > > On 10/29/2020 4:14 AM, Ard Biesheuvel wrote:
> > > > On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel <ardb@kernel.org> wrote:
> > > >>
> > > >> free_highpages() iterates over the free memblock regions in high
> > > >> memory, and marks each page as available for the memory management
> > > >> system. However, as it rounds the end of each region downwards, we
> > > >> may end up freeing a page that is memblock_reserve()d, resulting
> > > >> in memory corruption. So align the end of the range to the next
> > > >> page instead.
> > > >>
> > > >> Cc: <stable@vger.kernel.org>
> > > >> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > > >> ---
> > > >>  arch/arm/mm/init.c | 2 +-
> > > >>  1 file changed, 1 insertion(+), 1 deletion(-)
> > > >>
> > > >> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> > > >> index a391804c7ce3..d41781cb5496 100644
> > > >> --- a/arch/arm/mm/init.c
> > > >> +++ b/arch/arm/mm/init.c
> > > >> @@ -354,7 +354,7 @@ static void __init free_highpages(void)
> > > >>         for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
> > > >>                                 &range_start, &range_end, NULL) {
> > > >>                 unsigned long start = PHYS_PFN(range_start);
> > > >> -               unsigned long end = PHYS_PFN(range_end);
> > > >> +               unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
> > > >>
> > > >
> > > > Apologies, this should be
> > > >
> > > > -               unsigned long start = PHYS_PFN(range_start);
> > > > +               unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
> > > >                 unsigned long end = PHYS_PFN(range_end);
> > > >
> > > >
> > > > Strangely enough, the wrong version above also fixed the issue I was
> > > > seeing, but it is start that needs rounding up, not end.
> > >
> > > Is there a particular commit that you identified which could be used as
> > >  Fixes: tag to ease the back porting of such a change?
> >
> > Ah hold on. This appears to be a very recent regression, in
> > cddb5ddf2b76debdb8cad1728ad0a9321383d933, added in v5.10-rc1.
> >
> > The old code was
> >
> > unsigned long start = memblock_region_memory_base_pfn(mem);
> >
> > which uses PFN_UP() to round up, whereas the new code rounds down.
> >
> > Looks like this is broken on a lot of platforms.
> >
> > Mike?
>
> I've reviewed again the whole series and it seems that only highmem
> initialization on arm and xtensa (that copied this code from arm) have
> this problem. I might have missed something again, though.
>
> So, to restore the original behaviour I think the fix should be
>
>         for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
>                                 &range_start, &range_end, NULL) {
>                 unsigned long start = PHYS_UP(range_start);
>                 unsigned long end = PHYS_DOWN(range_end);
>
>

PHYS_UP and PHYS_DOWN don't exist.

Could you please send a patch that fixes this everywhere where it's broken?


* Re: [PATCH] ARM: highmem: avoid clobbering non-page aligned memory reservations
  2020-10-30 15:22         ` Ard Biesheuvel
@ 2020-10-30 21:31           ` Mike Rapoport
  2020-10-30 21:41             ` Ard Biesheuvel
  0 siblings, 1 reply; 8+ messages in thread
From: Mike Rapoport @ 2020-10-30 21:31 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: # 3.4.x, Linus Walleij, Florian Fainelli, Russell King, Linux ARM

On Fri, Oct 30, 2020 at 04:22:37PM +0100, Ard Biesheuvel wrote:
> On Fri, 30 Oct 2020 at 16:18, Mike Rapoport <rppt@linux.ibm.com> wrote:
> >
> > Hi Ard,
> >
> > > > On 10/29/2020 4:14 AM, Ard Biesheuvel wrote:
> > > > > On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel <ardb@kernel.org> wrote:
> > > > >>
> > > > >> free_highpages() iterates over the free memblock regions in high
> > > > >> memory, and marks each page as available for the memory management
> > > > >> system. However, as it rounds the end of each region downwards, we
> > > > >> may end up freeing a page that is memblock_reserve()d, resulting
> > > > >> in memory corruption. So align the end of the range to the next
> > > > >> page instead.
> > > > >>
> > > > >> Cc: <stable@vger.kernel.org>
> > > > >> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > > > >> ---
> > > > >>  arch/arm/mm/init.c | 2 +-
> > > > >>  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > >>
> > > > >> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> > > > >> index a391804c7ce3..d41781cb5496 100644
> > > > >> --- a/arch/arm/mm/init.c
> > > > >> +++ b/arch/arm/mm/init.c
> > > > >> @@ -354,7 +354,7 @@ static void __init free_highpages(void)
> > > > >>         for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
> > > > >>                                 &range_start, &range_end, NULL) {
> > > > >>                 unsigned long start = PHYS_PFN(range_start);
> > > > >> -               unsigned long end = PHYS_PFN(range_end);
> > > > >> +               unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
> > > > >>
> > > > >
> > > > > Apologies, this should be
> > > > >
> > > > > -               unsigned long start = PHYS_PFN(range_start);
> > > > > +               unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
> > > > >                 unsigned long end = PHYS_PFN(range_end);
> > > > >
> > > > >
> > > > > Strangely enough, the wrong version above also fixed the issue I was
> > > > > seeing, but it is start that needs rounding up, not end.
> > > >
> > > > Is there a particular commit that you identified which could be used as
> > > >  Fixes: tag to ease the back porting of such a change?
> > >
> > > Ah hold on. This appears to be a very recent regression, in
> > > cddb5ddf2b76debdb8cad1728ad0a9321383d933, added in v5.10-rc1.
> > >
> > > The old code was
> > >
> > > unsigned long start = memblock_region_memory_base_pfn(mem);
> > >
> > > which uses PFN_UP() to round up, whereas the new code rounds down.
> > >
> > > Looks like this is broken on a lot of platforms.
> > >
> > > Mike?
> >
> > I've reviewed again the whole series and it seems that only highmem
> > initialization on arm and xtensa (that copied this code from arm) have
> > this problem. I might have missed something again, though.
> >
> > So, to restore the original behaviour I think the fix should be
> >
> >         for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
> >                                 &range_start, &range_end, NULL) {
> >                 unsigned long start = PHYS_UP(range_start);
> >                 unsigned long end = PHYS_DOWN(range_end);
> >
> >
> 
> PHYS_UP and PHYS_DOWN don't exist.
> 
> Could you please send a patch that fixes this everywhere where it's broken?

Argh, this should have been PFN_{UP,DOWN}.
With the patch below qemu-system-arm boots for me. Does it fix your
setup as well?

I kept your authorship as you did the heavy lifting here :)

With acks from ARM and xtensa maintainers I can take it via memblock
tree.

From 5399699b9f8de405819c59c3feddecaac0ed1399 Mon Sep 17 00:00:00 2001
From: Ard Biesheuvel <ardb@kernel.org>
Date: Fri, 30 Oct 2020 22:53:02 +0200
Subject: [PATCH] ARM, xtensa: highmem: avoid clobbering non-page aligned
 memory reservations

free_highpages() iterates over the free memblock regions in high
memory, and marks each page as available for the memory management
system.

Until commit cddb5ddf2b76 ("arm, xtensa: simplify initialization of
high memory pages") it rounded beginning of each region upwards and end of
each region downwards.

However, after that commit free_highmem() rounds the beginning and end of
each region downwards, we and may end up freeing a page that is
memblock_reserve()d, resulting in memory corruption.

Restore the original rounding of the region boundaries to avoid freeing
reserved pages.

Fixes: cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Co-developed-by:  Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 arch/arm/mm/init.c    | 4 ++--
 arch/xtensa/mm/init.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index d57112a276f5..c23dbf8bebee 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -354,8 +354,8 @@ static void __init free_highpages(void)
 	/* set highmem page free */
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
-		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long start = PFN_UP(range_start);
+		unsigned long end = PFN_DOWN(range_end);
 
 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
index c6fc83efee0c..8731b7ad9308 100644
--- a/arch/xtensa/mm/init.c
+++ b/arch/xtensa/mm/init.c
@@ -89,8 +89,8 @@ static void __init free_highpages(void)
 	/* set highmem page free */
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
-		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long start = PFN_UP(range_start);
+		unsigned long end = PFN_DOWN(range_end);
 
 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
-- 
2.28.0


-- 
Sincerely yours,
Mike.


* Re: [PATCH] ARM: highmem: avoid clobbering non-page aligned memory reservations
  2020-10-30 21:31           ` Mike Rapoport
@ 2020-10-30 21:41             ` Ard Biesheuvel
  0 siblings, 0 replies; 8+ messages in thread
From: Ard Biesheuvel @ 2020-10-30 21:41 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: # 3.4.x, Linus Walleij, Florian Fainelli, Russell King, Linux ARM

On Fri, 30 Oct 2020 at 22:32, Mike Rapoport <rppt@linux.ibm.com> wrote:
>
> On Fri, Oct 30, 2020 at 04:22:37PM +0100, Ard Biesheuvel wrote:
> > On Fri, 30 Oct 2020 at 16:18, Mike Rapoport <rppt@linux.ibm.com> wrote:
> > >
> > > Hi Ard,
> > >
> > > > > On 10/29/2020 4:14 AM, Ard Biesheuvel wrote:
> > > > > > On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel <ardb@kernel.org> wrote:
> > > > > >>
> > > > > >> free_highpages() iterates over the free memblock regions in high
> > > > > >> memory, and marks each page as available for the memory management
> > > > > >> system. However, as it rounds the end of each region downwards, we
> > > > > >> may end up freeing a page that is memblock_reserve()d, resulting
> > > > > >> in memory corruption. So align the end of the range to the next
> > > > > >> page instead.
> > > > > >>
> > > > > >> Cc: <stable@vger.kernel.org>
> > > > > >> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > > > > >> ---
> > > > > >>  arch/arm/mm/init.c | 2 +-
> > > > > >>  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > > >>
> > > > > >> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> > > > > >> index a391804c7ce3..d41781cb5496 100644
> > > > > >> --- a/arch/arm/mm/init.c
> > > > > >> +++ b/arch/arm/mm/init.c
> > > > > >> @@ -354,7 +354,7 @@ static void __init free_highpages(void)
> > > > > >>         for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
> > > > > >>                                 &range_start, &range_end, NULL) {
> > > > > >>                 unsigned long start = PHYS_PFN(range_start);
> > > > > >> -               unsigned long end = PHYS_PFN(range_end);
> > > > > >> +               unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
> > > > > >>
> > > > > >
> > > > > > Apologies, this should be
> > > > > >
> > > > > > -               unsigned long start = PHYS_PFN(range_start);
> > > > > > +               unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
> > > > > >                 unsigned long end = PHYS_PFN(range_end);
> > > > > >
> > > > > >
> > > > > > Strangely enough, the wrong version above also fixed the issue I was
> > > > > > seeing, but it is start that needs rounding up, not end.
> > > > >
> > > > > Is there a particular commit that you identified which could be used as
> > > > >  Fixes: tag to ease the back porting of such a change?
> > > >
> > > > Ah hold on. This appears to be a very recent regression, in
> > > > cddb5ddf2b76debdb8cad1728ad0a9321383d933, added in v5.10-rc1.
> > > >
> > > > The old code was
> > > >
> > > > unsigned long start = memblock_region_memory_base_pfn(mem);
> > > >
> > > > which uses PFN_UP() to round up, whereas the new code rounds down.
> > > >
> > > > Looks like this is broken on a lot of platforms.
> > > >
> > > > Mike?
> > >
> > > I've reviewed again the whole series and it seems that only highmem
> > > initialization on arm and xtensa (that copied this code from arm) have
> > > this problem. I might have missed something again, though.
> > >
> > > So, to restore the original behaviour I think the fix should be
> > >
> > >         for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
> > >                                 &range_start, &range_end, NULL) {
> > >                 unsigned long start = PHYS_UP(range_start);
> > >                 unsigned long end = PHYS_DOWN(range_end);
> > >
> > >
> >
> > PHYS_UP and PHYS_DOWN don't exist.
> >
> > Could you please send a patch that fixes this everywhere where it's broken?
>
> Argh, this should have been PFN_{UP,DOWN}.
> With the patch below qemu-system-arm boots for me. Does it fix your
> setup as well?
>

Yeah that looks fine.

> I kept your authorship as you did the heavy lifting here :)
>
> With acks from ARM and xtensa maintainers I can take it via memblock
> tree.
>

That would be up to Russell (on cc) and whoever the xtensa maintainers
are, who are not on CC.

Would you mind chasing them?


> From 5399699b9f8de405819c59c3feddecaac0ed1399 Mon Sep 17 00:00:00 2001
> From: Ard Biesheuvel <ardb@kernel.org>
> Date: Fri, 30 Oct 2020 22:53:02 +0200
> Subject: [PATCH] ARM, xtensa: highmem: avoid clobbering non-page aligned
>  memory reservations
>
> free_highpages() iterates over the free memblock regions in high
> memory, and marks each page as available for the memory management
> system.
>
> Until commit cddb5ddf2b76 ("arm, xtensa: simplify initialization of
> high memory pages") it rounded beginning of each region upwards and end of
> each region downwards.
>
> However, after that commit free_highmem() rounds the beginning and end of
> each region downwards, we and may end up freeing a page that is

we may end up

> memblock_reserve()d, resulting in memory corruption.
>
> Restore the original rounding of the region boundaries to avoid freeing
> reserved pages.
>
> Fixes: cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages")
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> Co-developed-by:  Mike Rapoport <rppt@linux.ibm.com>
> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> ---
>  arch/arm/mm/init.c    | 4 ++--
>  arch/xtensa/mm/init.c | 4 ++--
>  2 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> index d57112a276f5..c23dbf8bebee 100644
> --- a/arch/arm/mm/init.c
> +++ b/arch/arm/mm/init.c
> @@ -354,8 +354,8 @@ static void __init free_highpages(void)
>         /* set highmem page free */
>         for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
>                                 &range_start, &range_end, NULL) {
> -               unsigned long start = PHYS_PFN(range_start);
> -               unsigned long end = PHYS_PFN(range_end);
> +               unsigned long start = PFN_UP(range_start);
> +               unsigned long end = PFN_DOWN(range_end);
>
>                 /* Ignore complete lowmem entries */
>                 if (end <= max_low)
> diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
> index c6fc83efee0c..8731b7ad9308 100644
> --- a/arch/xtensa/mm/init.c
> +++ b/arch/xtensa/mm/init.c
> @@ -89,8 +89,8 @@ static void __init free_highpages(void)
>         /* set highmem page free */
>         for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
>                                 &range_start, &range_end, NULL) {
> -               unsigned long start = PHYS_PFN(range_start);
> -               unsigned long end = PHYS_PFN(range_end);
> +               unsigned long start = PFN_UP(range_start);
> +               unsigned long end = PFN_DOWN(range_end);
>
>                 /* Ignore complete lowmem entries */
>                 if (end <= max_low)
> --
> 2.28.0
>
>
> --
> Sincerely yours,
> Mike.


Thread overview: 8 messages
2020-10-29 11:03 [PATCH] ARM: highmem: avoid clobbering non-page aligned memory reservations Ard Biesheuvel
2020-10-29 11:14 ` Ard Biesheuvel
2020-10-30  2:25   ` Florian Fainelli
2020-10-30  9:29     ` Ard Biesheuvel
2020-10-30 15:18       ` Mike Rapoport
2020-10-30 15:22         ` Ard Biesheuvel
2020-10-30 21:31           ` Mike Rapoport
2020-10-30 21:41             ` Ard Biesheuvel
