From: Mike Rapoport <rppt@linux.ibm.com>
To: Ard Biesheuvel <ardb@kernel.org>
Cc: "# 3.4.x" <stable@vger.kernel.org>,
Linus Walleij <linus.walleij@linaro.org>,
Florian Fainelli <f.fainelli@gmail.com>,
Russell King <linux@armlinux.org.uk>,
Linux ARM <linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH] ARM: highmem: avoid clobbering non-page aligned memory reservations
Date: Fri, 30 Oct 2020 23:31:59 +0200 [thread overview]
Message-ID: <20201030213159.GA14584@linux.ibm.com> (raw)
In-Reply-To: <CAMj1kXEOnSPvkjY7Wd3gpOj+JfpU6bNNtoZ68cEhTK8rin3dTw@mail.gmail.com>
On Fri, Oct 30, 2020 at 04:22:37PM +0100, Ard Biesheuvel wrote:
> On Fri, 30 Oct 2020 at 16:18, Mike Rapoport <rppt@linux.ibm.com> wrote:
> >
> > Hi Ard,
> >
> > > > On 10/29/2020 4:14 AM, Ard Biesheuvel wrote:
> > > > > On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel <ardb@kernel.org> wrote:
> > > > >>
> > > > >> free_highpages() iterates over the free memblock regions in high
> > > > >> memory, and marks each page as available for the memory management
> > > > >> system. However, as it rounds the end of each region downwards, we
> > > > >> may end up freeing a page that is memblock_reserve()d, resulting
> > > > >> in memory corruption. So align the end of the range to the next
> > > > >> page instead.
> > > > >>
> > > > >> Cc: <stable@vger.kernel.org>
> > > > >> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > > > >> ---
> > > > >> arch/arm/mm/init.c | 2 +-
> > > > >> 1 file changed, 1 insertion(+), 1 deletion(-)
> > > > >>
> > > > >> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> > > > >> index a391804c7ce3..d41781cb5496 100644
> > > > >> --- a/arch/arm/mm/init.c
> > > > >> +++ b/arch/arm/mm/init.c
> > > > >> @@ -354,7 +354,7 @@ static void __init free_highpages(void)
> > > > >> for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
> > > > >> &range_start, &range_end, NULL) {
> > > > >> unsigned long start = PHYS_PFN(range_start);
> > > > >> - unsigned long end = PHYS_PFN(range_end);
> > > > >> + unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
> > > > >>
> > > > >
> > > > > Apologies, this should be
> > > > >
> > > > > - unsigned long start = PHYS_PFN(range_start);
> > > > > + unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
> > > > > unsigned long end = PHYS_PFN(range_end);
> > > > >
> > > > >
> > > > > Strangely enough, the wrong version above also fixed the issue I was
> > > > > seeing, but it is start that needs rounding up, not end.
> > > >
> > > > Is there a particular commit that you identified which could be used as
> > > > Fixes: tag to ease the back porting of such a change?
> > >
> > > Ah hold on. This appears to be a very recent regression, in
> > > cddb5ddf2b76debdb8cad1728ad0a9321383d933, added in v5.10-rc1.
> > >
> > > The old code was
> > >
> > > unsigned long start = memblock_region_memory_base_pfn(mem);
> > >
> > > which uses PFN_UP() to round up, whereas the new code rounds down.
> > >
> > > Looks like this is broken on a lot of platforms.
> > >
> > > Mike?
> >
> > I've reviewed again the whole series and it seems that only highmem
> > initialization on arm and xtensa (that copied this code from arm) have
> > this problem. I might have missed something again, though.
> >
> > So, to restore the original behaviour I think the fix should be
> >
> > for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
> > &range_start, &range_end, NULL) {
> > unsigned long start = PHYS_UP(range_start);
> > unsigned long end = PHYS_DOWN(range_end);
> >
> >
>
> PHYS_UP and PHYS_DOWN don't exist.
>
> Could you please send a patch that fixes this everywhere where it's broken?
Argh, this should have been PFN_{UP,DOWN}.
With the patch below qemu-system-arm boots for me. Does it fix your
setup as well?
I kept your authorship as you did the heavy lifting here :)
With acks from ARM and xtensa maintainers I can take it via memblock
tree.
From 5399699b9f8de405819c59c3feddecaac0ed1399 Mon Sep 17 00:00:00 2001
From: Ard Biesheuvel <ardb@kernel.org>
Date: Fri, 30 Oct 2020 22:53:02 +0200
Subject: [PATCH] ARM, xtensa: highmem: avoid clobbering non-page aligned
memory reservations
free_highpages() iterates over the free memblock regions in high
memory, and marks each page as available for the memory management
system.
Until commit cddb5ddf2b76 ("arm, xtensa: simplify initialization of
high memory pages") it rounded the beginning of each region upwards and
the end of each region downwards.
However, since that commit free_highpages() rounds both the beginning
and the end of each region downwards, and we may end up freeing a page
that is memblock_reserve()d, resulting in memory corruption.
Restore the original rounding of the region boundaries to avoid freeing
reserved pages.
Fixes: cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Co-developed-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
arch/arm/mm/init.c | 4 ++--
arch/xtensa/mm/init.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index d57112a276f5..c23dbf8bebee 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -354,8 +354,8 @@ static void __init free_highpages(void)
/* set highmem page free */
for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
&range_start, &range_end, NULL) {
- unsigned long start = PHYS_PFN(range_start);
- unsigned long end = PHYS_PFN(range_end);
+ unsigned long start = PFN_UP(range_start);
+ unsigned long end = PFN_DOWN(range_end);
/* Ignore complete lowmem entries */
if (end <= max_low)
diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
index c6fc83efee0c..8731b7ad9308 100644
--- a/arch/xtensa/mm/init.c
+++ b/arch/xtensa/mm/init.c
@@ -89,8 +89,8 @@ static void __init free_highpages(void)
/* set highmem page free */
for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
&range_start, &range_end, NULL) {
- unsigned long start = PHYS_PFN(range_start);
- unsigned long end = PHYS_PFN(range_end);
+ unsigned long start = PFN_UP(range_start);
+ unsigned long end = PFN_DOWN(range_end);
/* Ignore complete lowmem entries */
if (end <= max_low)
--
2.28.0
--
Sincerely yours,
Mike.
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
Thread overview: 8+ messages
2020-10-29 11:03 [PATCH] ARM: highmem: avoid clobbering non-page aligned memory reservations Ard Biesheuvel
2020-10-29 11:14 ` Ard Biesheuvel
2020-10-30 2:25 ` Florian Fainelli
2020-10-30 9:29 ` Ard Biesheuvel
2020-10-30 15:18 ` Mike Rapoport
2020-10-30 15:22 ` Ard Biesheuvel
2020-10-30 21:31 ` Mike Rapoport [this message]
2020-10-30 21:41 ` Ard Biesheuvel