* [PATCH] x86,mm: fix init_mem_mapping() when the first memory chunk is small
From: David Vrabel @ 2013-03-08 18:47 UTC (permalink / raw)
To: linux-kernel
Cc: x86, H. Peter Anvin, Yinghai Lu, Ingo Molnar, Thomas Gleixner,
David Vrabel
In init_mem_mapping(), if the first chunk of memory that is mapped is
small, there will not be enough mapped pages to allocate page table
pages for the next (larger) chunk.
Estimate how many pages are used for the mappings so far and how many
are needed for a larger chunk, and only increase step_size if there
are enough free pages.
This fixes a boot failure on a system where the first chunk of memory
mapped only had 3 pages in it.
init_memory_mapping: [mem 0x00000000-0x000fffff]
init_memory_mapping: [mem 0x20d000000-0x20d002fff]
init_memory_mapping: [mem 0x20c000000-0x20cffffff]
Kernel panic - not syncing: alloc_low_page: can not alloc memory
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
arch/x86/mm/init.c | 21 +++++++++++++++------
1 files changed, 15 insertions(+), 6 deletions(-)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 4903a03..0cc7afb 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -389,6 +389,12 @@ static unsigned long __init init_range_memory_mapping(
return mapped_ram_size;
}
+/* Estimate of the number of pages needed to map 'size' bytes. */
+static unsigned long __init nr_pages_to_map(unsigned long size)
+{
+ return DIV_ROUND_UP(size, PMD_SIZE) + DIV_ROUND_UP(size, PUD_SIZE);
+}
+
/* (PUD_SHIFT-PMD_SHIFT)/2 */
#define STEP_SIZE_SHIFT 5
void __init init_mem_mapping(void)
@@ -397,7 +403,7 @@ void __init init_mem_mapping(void)
unsigned long step_size;
unsigned long addr;
unsigned long mapped_ram_size = 0;
- unsigned long new_mapped_ram_size;
+ unsigned long mapped_pages;
probe_page_size_mask();
@@ -427,14 +433,17 @@ void __init init_mem_mapping(void)
start = ISA_END_ADDRESS;
} else
start = ISA_END_ADDRESS;
- new_mapped_ram_size = init_range_memory_mapping(start,
- last_start);
+ mapped_ram_size += init_range_memory_mapping(start,
+ last_start);
+ mapped_pages = mapped_ram_size >> PAGE_SHIFT;
last_start = start;
min_pfn_mapped = last_start >> PAGE_SHIFT;
- /* only increase step_size after big range get mapped */
- if (new_mapped_ram_size > mapped_ram_size)
+
+ /* Only increase step_size if there is enough mapped
+ ram to map the larger block. */
+ if (nr_pages_to_map(step_size << STEP_SIZE_SHIFT)
+ < mapped_pages - nr_pages_to_map(mapped_ram_size))
step_size <<= STEP_SIZE_SHIFT;
- mapped_ram_size += new_mapped_ram_size;
}
if (real_end < end)
--
1.7.2.5
* Re: [PATCH] x86,mm: fix init_mem_mapping() when the first memory chunk is small
From: Yinghai Lu @ 2013-03-08 19:01 UTC (permalink / raw)
To: David Vrabel
Cc: linux-kernel, x86, H. Peter Anvin, Ingo Molnar, Thomas Gleixner
On Fri, Mar 8, 2013 at 10:47 AM, David Vrabel <david.vrabel@citrix.com> wrote:
> In init_mem_mapping(), if the first chunk of memory that is mapped is
> small, there will not be enough mapped pages to allocate page table
> pages for the next (larger) chunk.
>
> Estimate how many pages are used for the mappings so far and how many
> are needed for a larger chunk, and only increase step_size if there
> are enough free pages.
>
> This fixes a boot failure on a system where the first chunk of memory
> mapped only had 3 pages in it.
>
> init_memory_mapping: [mem 0x00000000-0x000fffff]
> init_memory_mapping: [mem 0x20d000000-0x20d002fff]
> init_memory_mapping: [mem 0x20c000000-0x20cffffff]
> Kernel panic - not syncing: alloc_low_page: can not alloc memory
Can you check the current Linus tree?
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=98e7a989979b185f49e86ddaed2ad6890299d9f0
should fix the problem with your system.
Thanks
Yinghai
* Re: [PATCH] x86,mm: fix init_mem_mapping() when the first memory chunk is small
From: David Vrabel @ 2013-03-08 19:37 UTC (permalink / raw)
To: Yinghai Lu
Cc: linux-kernel, x86, H. Peter Anvin, Ingo Molnar, Thomas Gleixner
On 08/03/13 19:01, Yinghai Lu wrote:
> On Fri, Mar 8, 2013 at 10:47 AM, David Vrabel <david.vrabel@citrix.com> wrote:
>> In init_mem_mapping(), if the first chunk of memory that is mapped is
>> small, there will not be enough mapped pages to allocate page table
>> pages for the next (larger) chunk.
>>
>> Estimate how many pages are used for the mappings so far and how many
>> are needed for a larger chunk, and only increase step_size if there
>> are enough free pages.
>>
>> This fixes a boot failure on a system where the first chunk of memory
>> mapped only had 3 pages in it.
>>
>> init_memory_mapping: [mem 0x00000000-0x000fffff]
>> init_memory_mapping: [mem 0x20d000000-0x20d002fff]
>> init_memory_mapping: [mem 0x20c000000-0x20cffffff]
>> Kernel panic - not syncing: alloc_low_page: can not alloc memory
>
> Can you check current linus tree?
>
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=98e7a989979b185f49e86ddaed2ad6890299d9f0
>
> should fix the problem with your system.
Yes, that fixes it, thanks.
David