* Patch "arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP" has been added to the 3.10-stable tree
@ 2015-07-17  0:38 gregkh
From: gregkh @ 2015-07-17  0:38 UTC (permalink / raw)
  To: Dave.Martin, catalin.marinas, gregkh; +Cc: stable, stable-commits


This is a note to let you know that I've just added the patch titled

    arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP

to the 3.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arm64-mm-fix-freeing-of-the-wrong-memmap-entries-with-sparsemem_vmemmap.patch
and it can be found in the queue-3.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


From b9bcc919931611498e856eae9bf66337330d04cc Mon Sep 17 00:00:00 2001
From: Dave P Martin <Dave.Martin@arm.com>
Date: Tue, 16 Jun 2015 17:38:47 +0100
Subject: arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP

From: Dave P Martin <Dave.Martin@arm.com>

commit b9bcc919931611498e856eae9bf66337330d04cc upstream.

The memmap freeing code in free_unused_memmap() computes the end of
each memblock by adding the memblock size onto the base.  However,
if SPARSEMEM is enabled then the value (start) used for the base
may already have been rounded downwards to work out which memmap
entries to free after the previous memblock.

This may cause memmap entries that are in use to get freed.

In general, you're not likely to hit this problem unless there
are at least 2 memblocks and one of them is not aligned to a
sparsemem section boundary.  Note that carve-outs can increase
the number of memblocks by splitting the regions listed in the
device tree.

This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
vmemmap code deals with freeing the unused regions of the memmap
instead of requiring the arch code to do it.

This patch gets the memblock base out of the memblock directly when
computing the block end address to ensure the correct value is used.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/arm64/mm/init.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -262,7 +262,7 @@ static void __init free_unused_memmap(vo
 		 * memmap entries are valid from the bank end aligned to
 		 * MAX_ORDER_NR_PAGES.
 		 */
-		prev_end = ALIGN(start + __phys_to_pfn(reg->size),
+		prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
 				 MAX_ORDER_NR_PAGES);
 	}
 


Patches currently in stable-queue which might be from Dave.Martin@arm.com are

queue-3.10/arm64-mm-fix-freeing-of-the-wrong-memmap-entries-with-sparsemem_vmemmap.patch
