linux-kernel.vger.kernel.org archive mirror
* [PATCH -v2 00/13] x86, mm: init_memory_mapping cleanup
@ 2012-09-02  7:46 Yinghai Lu
  2012-09-02  7:46 ` [PATCH -v2 01/13] x86, mm: Add global page_size_mask Yinghai Lu
                   ` (12 more replies)
  0 siblings, 13 replies; 33+ messages in thread
From: Yinghai Lu @ 2012-09-02  7:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin, Tejun Heo
  Cc: linux-kernel, Yinghai Lu

Only create mappings for E820_RAM and E820_RESERVED_KERN regions.

Separate calculate_table_space_size() and the early page table search
out from init_memory_mapping().

Also, for 64bit, the page table for the first 1M will be in BRK.

For the other ranges, page tables are allocated one time, but mappings
are initialized only for E820_RAM and E820_RESERVED_KERN.
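
As a rough standalone sketch of the direction the series takes (made-up
ranges and simplified types below, not the actual kernel code): instead of
mapping [0, max_pfn) wholesale, only the RAM ranges are walked and mapped,
so the holes between them never get page table entries.

    #include <stdio.h>

    struct ram_range { unsigned long start_pfn, end_pfn; };

    static void map_range(unsigned long spfn, unsigned long epfn)
    {
        printf("map pfns [%#lx-%#lx)\n", spfn, epfn);   /* stand-in */
    }

    int main(void)
    {
        /* made-up RAM layout with a hole between the two entries */
        struct ram_range r[] = { { 0x0, 0x9f }, { 0x100, 0xc7ec0 } };
        unsigned int i;

        for (i = 0; i < sizeof(r) / sizeof(r[0]); i++)
            map_range(r[i].start_pfn, r[i].end_pfn);
        return 0;
    }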

The series can be found at:
git://git.kernel.org/pub/scm/linux/kernel/git/yinghai/linux-yinghai.git for-x86-mm

Thanks
	Yinghai

Jacob Shin (4):
  x86: if kernel .text .data .bss are not marked as E820_RAM, complain
    and fix
  x86: Fixup code testing if a pfn is direct mapped
  x86: Only direct map addresses that are marked as E820_RAM
  x86/mm: calculate_table_space_size based on memory ranges that are
    being mapped

Yinghai Lu (9):
  x86, mm: Add global page_size_mask
  x86, mm: Split out split_mem_range
  x86, mm: Moving init_memory_mapping calling
  x86, mm: Revert back good_end setting for 64bit
  x86, mm: Find early page table only one time
  x86, mm: Separate out calculate_table_space_size()
  x86, mm: Move down two calculate_table_space_size down.
  x86, mm: Use func pointer to table size calculation and mapping.
  x86, 64bit: Map first 1M ram early before memblock_x86_fill()

 arch/x86/include/asm/init.h       |    1 -
 arch/x86/include/asm/page_types.h |    3 +
 arch/x86/include/asm/pgtable.h    |    2 +
 arch/x86/kernel/cpu/amd.c         |    8 +-
 arch/x86/kernel/head64.c          |    2 +-
 arch/x86/kernel/setup.c           |   36 ++--
 arch/x86/mm/init.c                |  381 ++++++++++++++++++++++++++-----------
 arch/x86/mm/init_64.c             |   13 +-
 arch/x86/platform/efi/efi.c       |    8 +-
 9 files changed, 315 insertions(+), 139 deletions(-)

-- 
1.7.7



* [PATCH -v2 01/13] x86, mm: Add global page_size_mask
  2012-09-02  7:46 [PATCH -v2 00/13] x86, mm: init_memory_mapping cleanup Yinghai Lu
@ 2012-09-02  7:46 ` Yinghai Lu
  2012-09-03  5:23   ` Pekka Enberg
  2012-09-02  7:46 ` [PATCH -v2 02/13] x86, mm: Split out split_mem_range Yinghai Lu
                   ` (11 subsequent siblings)
  12 siblings, 1 reply; 33+ messages in thread
From: Yinghai Lu @ 2012-09-02  7:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin, Tejun Heo
  Cc: linux-kernel, Yinghai Lu

Detect whether 1G or 2M pages can be used and store the result in
page_size_mask.

Only probe this one time.
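
A small standalone sketch of the idea: the mask is filled in once and is
only tested later when a page size is picked (the PG_LEVEL_* bit positions
below are illustrative, not the kernel's enum values).

    #include <stdio.h>

    #define PG_LEVEL_2M 1               /* illustrative bit positions only */
    #define PG_LEVEL_1G 2

    static int page_size_mask;          /* probed once, read many times */

    static const char *pick_page_size(void)
    {
        if (page_size_mask & (1 << PG_LEVEL_1G))
            return "1G";
        if (page_size_mask & (1 << PG_LEVEL_2M))
            return "2M";
        return "4k";
    }

    int main(void)
    {
        page_size_mask |= 1 << PG_LEVEL_2M;     /* e.g. the CPU has PSE */
        printf("%s\n", pick_page_size());       /* prints "2M" */
        return 0;
    }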

Suggested-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
---
 arch/x86/include/asm/pgtable.h |    1 +
 arch/x86/kernel/setup.c        |    1 +
 arch/x86/mm/init.c             |   66 +++++++++++++++++++---------------------
 3 files changed, 33 insertions(+), 35 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 49afb3f..e47e4db 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -597,6 +597,7 @@ static inline int pgd_none(pgd_t pgd)
 #ifndef __ASSEMBLY__
 
 extern int direct_gbpages;
+void probe_page_size_mask(void);
 
 /* local pte updates need not use xchg for locking */
 static inline pte_t native_local_ptep_get_and_clear(pte_t *ptep)
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index f4b9b80..d6e8c03 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -912,6 +912,7 @@ void __init setup_arch(char **cmdline_p)
 	setup_real_mode();
 
 	init_gbpages();
+	probe_page_size_mask();
 
 	/* max_pfn_mapped is updated here */
 	max_low_pfn_mapped = init_memory_mapping(0, max_low_pfn<<PAGE_SHIFT);
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index e0e6990..838e9bc 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -35,8 +35,10 @@ struct map_range {
 	unsigned page_size_mask;
 };
 
-static void __init find_early_table_space(struct map_range *mr, unsigned long end,
-					  int use_pse, int use_gbpages)
+static int page_size_mask;
+
+static void __init find_early_table_space(struct map_range *mr,
+					  unsigned long end)
 {
 	unsigned long puds, pmds, ptes, tables, start = 0, good_end = end;
 	phys_addr_t base;
@@ -44,7 +46,7 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
 	puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
 	tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
 
-	if (use_gbpages) {
+	if (page_size_mask & (1 << PG_LEVEL_1G)) {
 		unsigned long extra;
 
 		extra = end - ((end>>PUD_SHIFT) << PUD_SHIFT);
@@ -54,7 +56,7 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
 
 	tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
 
-	if (use_pse) {
+	if (page_size_mask & (1 << PG_LEVEL_2M)) {
 		unsigned long extra;
 
 		extra = end - ((end>>PMD_SHIFT) << PMD_SHIFT);
@@ -90,6 +92,30 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
 		(pgt_buf_top << PAGE_SHIFT) - 1);
 }
 
+void probe_page_size_mask(void)
+{
+#if !defined(CONFIG_DEBUG_PAGEALLOC) && !defined(CONFIG_KMEMCHECK)
+	/*
+	 * For CONFIG_DEBUG_PAGEALLOC, identity mapping will use small pages.
+	 * This will simplify cpa(), which otherwise needs to support splitting
+	 * large pages into small in interrupt context, etc.
+	 */
+	if (direct_gbpages)
+		page_size_mask |= 1 << PG_LEVEL_1G;
+	if (cpu_has_pse)
+		page_size_mask |= 1 << PG_LEVEL_2M;
+#endif
+
+	/* Enable PSE if available */
+	if (cpu_has_pse)
+		set_in_cr4(X86_CR4_PSE);
+
+	/* Enable PGE if available */
+	if (cpu_has_pge) {
+		set_in_cr4(X86_CR4_PGE);
+		__supported_pte_mask |= _PAGE_GLOBAL;
+	}
+}
 void __init native_pagetable_reserve(u64 start, u64 end)
 {
 	memblock_reserve(start, end - start);
@@ -125,45 +151,15 @@ static int __meminit save_mr(struct map_range *mr, int nr_range,
 unsigned long __init_refok init_memory_mapping(unsigned long start,
 					       unsigned long end)
 {
-	unsigned long page_size_mask = 0;
 	unsigned long start_pfn, end_pfn;
 	unsigned long ret = 0;
 	unsigned long pos;
-
 	struct map_range mr[NR_RANGE_MR];
 	int nr_range, i;
-	int use_pse, use_gbpages;
 
 	printk(KERN_INFO "init_memory_mapping: [mem %#010lx-%#010lx]\n",
 	       start, end - 1);
 
-#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KMEMCHECK)
-	/*
-	 * For CONFIG_DEBUG_PAGEALLOC, identity mapping will use small pages.
-	 * This will simplify cpa(), which otherwise needs to support splitting
-	 * large pages into small in interrupt context, etc.
-	 */
-	use_pse = use_gbpages = 0;
-#else
-	use_pse = cpu_has_pse;
-	use_gbpages = direct_gbpages;
-#endif
-
-	/* Enable PSE if available */
-	if (cpu_has_pse)
-		set_in_cr4(X86_CR4_PSE);
-
-	/* Enable PGE if available */
-	if (cpu_has_pge) {
-		set_in_cr4(X86_CR4_PGE);
-		__supported_pte_mask |= _PAGE_GLOBAL;
-	}
-
-	if (use_gbpages)
-		page_size_mask |= 1 << PG_LEVEL_1G;
-	if (use_pse)
-		page_size_mask |= 1 << PG_LEVEL_2M;
-
 	memset(mr, 0, sizeof(mr));
 	nr_range = 0;
 
@@ -267,7 +263,7 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
 	 * nodes are discovered.
 	 */
 	if (!after_bootmem)
-		find_early_table_space(&mr[0], end, use_pse, use_gbpages);
+		find_early_table_space(&mr[0], end);
 
 	for (i = 0; i < nr_range; i++)
 		ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
-- 
1.7.7



* [PATCH -v2 02/13] x86, mm: Split out split_mem_range
  2012-09-02  7:46 [PATCH -v2 00/13] x86, mm: init_memory_mapping cleanup Yinghai Lu
  2012-09-02  7:46 ` [PATCH -v2 01/13] x86, mm: Add global page_size_mask Yinghai Lu
@ 2012-09-02  7:46 ` Yinghai Lu
  2012-09-03  5:23   ` Pekka Enberg
  2012-09-02  7:46 ` [PATCH -v2 03/13] x86, mm: Moving init_memory_mapping calling Yinghai Lu
                   ` (10 subsequent siblings)
  12 siblings, 1 reply; 33+ messages in thread
From: Yinghai Lu @ 2012-09-02  7:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin, Tejun Heo
  Cc: linux-kernel, Yinghai Lu

Split split_mem_range() out from init_memory_mapping() to make
init_memory_mapping() more readable.
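
Conceptually, the helper turns one [start, end) request into a short array
of sub-ranges, each tagged with the largest page size it may use. Below is
a simplified standalone sketch of that splitting which handles 2M alignment
only; the real split_mem_range() also deals with 1G pages and pfn rounding.

    #include <stdio.h>

    #define PMD_SIZE (2UL << 20)        /* 2M, for illustration */

    struct map_range { unsigned long start, end; int big; };

    /*
     * Split [start, end) into an unaligned head, a 2M-aligned body and an
     * unaligned tail; returns the number of sub-ranges produced.
     */
    static int split_range(struct map_range *mr, unsigned long start,
                           unsigned long end)
    {
        unsigned long body_start = (start + PMD_SIZE - 1) & ~(PMD_SIZE - 1);
        unsigned long body_end = end & ~(PMD_SIZE - 1);
        int n = 0;

        if (body_start >= body_end) {           /* too small for a 2M page */
            mr[n++] = (struct map_range){ start, end, 0 };
            return n;
        }
        if (start < body_start)                 /* unaligned head */
            mr[n++] = (struct map_range){ start, body_start, 0 };
        mr[n++] = (struct map_range){ body_start, body_end, 1 };
        if (body_end < end)                     /* unaligned tail */
            mr[n++] = (struct map_range){ body_end, end, 0 };
        return n;
    }

    int main(void)
    {
        struct map_range mr[3];
        int i, n = split_range(mr, 0x155000, 0x8000000);

        for (i = 0; i < n; i++)
            printf("[%#lx-%#lx) %s\n", mr[i].start, mr[i].end,
                   mr[i].big ? "2M" : "4k");
        return 0;
    }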

Suggested-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
---
 arch/x86/mm/init.c |   42 ++++++++++++++++++++++++++----------------
 1 files changed, 26 insertions(+), 16 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 838e9bc..7d05e28 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -143,25 +143,13 @@ static int __meminit save_mr(struct map_range *mr, int nr_range,
 	return nr_range;
 }
 
-/*
- * Setup the direct mapping of the physical memory at PAGE_OFFSET.
- * This runs before bootmem is initialized and gets pages directly from
- * the physical memory. To access them they are temporarily mapped.
- */
-unsigned long __init_refok init_memory_mapping(unsigned long start,
-					       unsigned long end)
+static int __meminit split_mem_range(struct map_range *mr, int nr_range,
+				     unsigned long start,
+				     unsigned long end)
 {
 	unsigned long start_pfn, end_pfn;
-	unsigned long ret = 0;
 	unsigned long pos;
-	struct map_range mr[NR_RANGE_MR];
-	int nr_range, i;
-
-	printk(KERN_INFO "init_memory_mapping: [mem %#010lx-%#010lx]\n",
-	       start, end - 1);
-
-	memset(mr, 0, sizeof(mr));
-	nr_range = 0;
+	int i;
 
 	/* head if not big page alignment ? */
 	start_pfn = start >> PAGE_SHIFT;
@@ -255,6 +243,28 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
 			(mr[i].page_size_mask & (1<<PG_LEVEL_1G))?"1G":(
 			 (mr[i].page_size_mask & (1<<PG_LEVEL_2M))?"2M":"4k"));
 
+	return nr_range;
+}
+
+/*
+ * Setup the direct mapping of the physical memory at PAGE_OFFSET.
+ * This runs before bootmem is initialized and gets pages directly from
+ * the physical memory. To access them they are temporarily mapped.
+ */
+unsigned long __init_refok init_memory_mapping(unsigned long start,
+					       unsigned long end)
+{
+	struct map_range mr[NR_RANGE_MR];
+	unsigned long ret = 0;
+	int nr_range, i;
+
+	pr_info("init_memory_mapping: [mem %#010lx-%#010lx]\n",
+	       start, end - 1);
+
+	memset(mr, 0, sizeof(mr));
+	nr_range = 0;
+	nr_range = split_mem_range(mr, nr_range, start, end);
+
 	/*
 	 * Find space for the kernel direct mapping tables.
 	 *
-- 
1.7.7



* [PATCH -v2 03/13] x86, mm: Moving init_memory_mapping calling
  2012-09-02  7:46 [PATCH -v2 00/13] x86, mm: init_memory_mapping cleanup Yinghai Lu
  2012-09-02  7:46 ` [PATCH -v2 01/13] x86, mm: Add global page_size_mask Yinghai Lu
  2012-09-02  7:46 ` [PATCH -v2 02/13] x86, mm: Split out split_mem_range Yinghai Lu
@ 2012-09-02  7:46 ` Yinghai Lu
  2012-09-03  5:24   ` Pekka Enberg
  2012-09-02  7:46 ` [PATCH -v2 04/13] x86, mm: Revert back good_end setting for 64bit Yinghai Lu
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 33+ messages in thread
From: Yinghai Lu @ 2012-09-02  7:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin, Tejun Heo
  Cc: linux-kernel, Yinghai Lu

Move the init_memory_mapping() calls from setup.c to mm/init.c, so that all
related calls can be updated together later.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
---
 arch/x86/include/asm/init.h    |    1 -
 arch/x86/include/asm/pgtable.h |    2 +-
 arch/x86/kernel/setup.c        |   13 +------------
 arch/x86/mm/init.c             |   19 ++++++++++++++++++-
 4 files changed, 20 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/init.h b/arch/x86/include/asm/init.h
index adcc0ae..4f13998 100644
--- a/arch/x86/include/asm/init.h
+++ b/arch/x86/include/asm/init.h
@@ -12,7 +12,6 @@ kernel_physical_mapping_init(unsigned long start,
 			     unsigned long end,
 			     unsigned long page_size_mask);
 
-
 extern unsigned long __initdata pgt_buf_start;
 extern unsigned long __meminitdata pgt_buf_end;
 extern unsigned long __meminitdata pgt_buf_top;
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index e47e4db..ae2cabb 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -597,7 +597,7 @@ static inline int pgd_none(pgd_t pgd)
 #ifndef __ASSEMBLY__
 
 extern int direct_gbpages;
-void probe_page_size_mask(void);
+void init_mem_mapping(void);
 
 /* local pte updates need not use xchg for locking */
 static inline pte_t native_local_ptep_get_and_clear(pte_t *ptep)
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index d6e8c03..c30c78c 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -912,20 +912,9 @@ void __init setup_arch(char **cmdline_p)
 	setup_real_mode();
 
 	init_gbpages();
-	probe_page_size_mask();
 
-	/* max_pfn_mapped is updated here */
-	max_low_pfn_mapped = init_memory_mapping(0, max_low_pfn<<PAGE_SHIFT);
-	max_pfn_mapped = max_low_pfn_mapped;
+	init_mem_mapping();
 
-#ifdef CONFIG_X86_64
-	if (max_pfn > max_low_pfn) {
-		max_pfn_mapped = init_memory_mapping(1UL<<32,
-						     max_pfn<<PAGE_SHIFT);
-		/* can we preseve max_low_pfn ?*/
-		max_low_pfn = max_pfn;
-	}
-#endif
 	memblock.current_limit = get_max_mapped();
 	dma_contiguous_reserve(0);
 
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 7d05e28..15a6a38 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -92,7 +92,7 @@ static void __init find_early_table_space(struct map_range *mr,
 		(pgt_buf_top << PAGE_SHIFT) - 1);
 }
 
-void probe_page_size_mask(void)
+static void __init probe_page_size_mask(void)
 {
 #if !defined(CONFIG_DEBUG_PAGEALLOC) && !defined(CONFIG_KMEMCHECK)
 	/*
@@ -312,6 +312,23 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
 	return ret >> PAGE_SHIFT;
 }
 
+void __init init_mem_mapping(void)
+{
+	probe_page_size_mask();
+
+	/* max_pfn_mapped is updated here */
+	max_low_pfn_mapped = init_memory_mapping(0, max_low_pfn<<PAGE_SHIFT);
+	max_pfn_mapped = max_low_pfn_mapped;
+
+#ifdef CONFIG_X86_64
+	if (max_pfn > max_low_pfn) {
+		max_pfn_mapped = init_memory_mapping(1UL<<32,
+						     max_pfn<<PAGE_SHIFT);
+		/* can we preseve max_low_pfn ?*/
+		max_low_pfn = max_pfn;
+	}
+#endif
+}
 
 /*
  * devmem_is_allowed() checks to see if /dev/mem access to a certain address
-- 
1.7.7



* [PATCH -v2 04/13] x86, mm: Revert back good_end setting for 64bit
  2012-09-02  7:46 [PATCH -v2 00/13] x86, mm: init_memory_mapping cleanup Yinghai Lu
                   ` (2 preceding siblings ...)
  2012-09-02  7:46 ` [PATCH -v2 03/13] x86, mm: Moving init_memory_mapping calling Yinghai Lu
@ 2012-09-02  7:46 ` Yinghai Lu
  2012-09-03  5:25   ` Pekka Enberg
  2012-09-02  7:46 ` [PATCH -v2 05/13] x86, mm: Find early page table only one time Yinghai Lu
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 33+ messages in thread
From: Yinghai Lu @ 2012-09-02  7:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin, Tejun Heo
  Cc: linux-kernel, Yinghai Lu

So we can place the page tables high again for 64bit.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
---
 arch/x86/mm/init.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 15a6a38..cca9b7d 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -76,8 +76,8 @@ static void __init find_early_table_space(struct map_range *mr,
 #ifdef CONFIG_X86_32
 	/* for fixmap */
 	tables += roundup(__end_of_fixed_addresses * sizeof(pte_t), PAGE_SIZE);
-#endif
 	good_end = max_pfn_mapped << PAGE_SHIFT;
+#endif
 
 	base = memblock_find_in_range(start, good_end, tables, PAGE_SIZE);
 	if (!base)
-- 
1.7.7



* [PATCH -v2 05/13] x86, mm: Find early page table only one time
  2012-09-02  7:46 [PATCH -v2 00/13] x86, mm: init_memory_mapping cleanup Yinghai Lu
                   ` (3 preceding siblings ...)
  2012-09-02  7:46 ` [PATCH -v2 04/13] x86, mm: Revert back good_end setting for 64bit Yinghai Lu
@ 2012-09-02  7:46 ` Yinghai Lu
  2012-09-03  5:27   ` Pekka Enberg
  2012-09-02  7:46 ` [PATCH -v2 06/13] x86, mm: Separate out calculate_table_space_size() Yinghai Lu
                   ` (7 subsequent siblings)
  12 siblings, 1 reply; 33+ messages in thread
From: Yinghai Lu @ 2012-09-02  7:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin, Tejun Heo
  Cc: linux-kernel, Yinghai Lu

We should not search for page table space on every call of
init_memory_mapping(); during early boot it only needs to be done once.

Also move early_memtest() down.

-v2: fix early_memtest() for 32bit by passing max_pfn_mapped instead.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
---
 arch/x86/mm/init.c |   72 ++++++++++++++++++++++++++-------------------------
 1 files changed, 37 insertions(+), 35 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index cca9b7d..0ada295 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -37,7 +37,7 @@ struct map_range {
 
 static int page_size_mask;
 
-static void __init find_early_table_space(struct map_range *mr,
+static void __init find_early_table_space(unsigned long begin,
 					  unsigned long end)
 {
 	unsigned long puds, pmds, ptes, tables, start = 0, good_end = end;
@@ -64,8 +64,8 @@ static void __init find_early_table_space(struct map_range *mr,
 		extra += PMD_SIZE;
 #endif
 		/* The first 2/4M doesn't use large pages. */
-		if (mr->start < PMD_SIZE)
-			extra += mr->end - mr->start;
+		if (begin < PMD_SIZE)
+			extra += (PMD_SIZE - begin) >> PAGE_SHIFT;
 
 		ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	} else
@@ -265,16 +265,6 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
 	nr_range = 0;
 	nr_range = split_mem_range(mr, nr_range, start, end);
 
-	/*
-	 * Find space for the kernel direct mapping tables.
-	 *
-	 * Later we should allocate these tables in the local node of the
-	 * memory mapped. Unfortunately this is done currently before the
-	 * nodes are discovered.
-	 */
-	if (!after_bootmem)
-		find_early_table_space(&mr[0], end);
-
 	for (i = 0; i < nr_range; i++)
 		ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
 						   mr[i].page_size_mask);
@@ -287,6 +277,36 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
 
 	__flush_tlb_all();
 
+	return ret >> PAGE_SHIFT;
+}
+
+void __init init_mem_mapping(void)
+{
+	probe_page_size_mask();
+
+	/*
+	 * Find space for the kernel direct mapping tables.
+	 *
+	 * Later we should allocate these tables in the local node of the
+	 * memory mapped. Unfortunately this is done currently before the
+	 * nodes are discovered.
+	 */
+#ifdef CONFIG_X86_64
+	find_early_table_space(0, max_pfn<<PAGE_SHIFT);
+#else
+	find_early_table_space(0, max_low_pfn<<PAGE_SHIFT);
+#endif
+	max_low_pfn_mapped = init_memory_mapping(0, max_low_pfn<<PAGE_SHIFT);
+	max_pfn_mapped = max_low_pfn_mapped;
+
+#ifdef CONFIG_X86_64
+	if (max_pfn > max_low_pfn) {
+		max_pfn_mapped = init_memory_mapping(1UL<<32,
+						     max_pfn<<PAGE_SHIFT);
+		/* can we preseve max_low_pfn ?*/
+		max_low_pfn = max_pfn;
+	}
+#endif
 	/*
 	 * Reserve the kernel pagetable pages we used (pgt_buf_start -
 	 * pgt_buf_end) and free the other ones (pgt_buf_end - pgt_buf_top)
@@ -302,32 +322,14 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
 	 * RO all the pagetable pages, including the ones that are beyond
 	 * pgt_buf_end at that time.
 	 */
-	if (!after_bootmem && pgt_buf_end > pgt_buf_start)
+	if (pgt_buf_end > pgt_buf_start)
 		x86_init.mapping.pagetable_reserve(PFN_PHYS(pgt_buf_start),
 				PFN_PHYS(pgt_buf_end));
 
-	if (!after_bootmem)
-		early_memtest(start, end);
+	/* stop the wrong using */
+	pgt_buf_top = 0;
 
-	return ret >> PAGE_SHIFT;
-}
-
-void __init init_mem_mapping(void)
-{
-	probe_page_size_mask();
-
-	/* max_pfn_mapped is updated here */
-	max_low_pfn_mapped = init_memory_mapping(0, max_low_pfn<<PAGE_SHIFT);
-	max_pfn_mapped = max_low_pfn_mapped;
-
-#ifdef CONFIG_X86_64
-	if (max_pfn > max_low_pfn) {
-		max_pfn_mapped = init_memory_mapping(1UL<<32,
-						     max_pfn<<PAGE_SHIFT);
-		/* can we preseve max_low_pfn ?*/
-		max_low_pfn = max_pfn;
-	}
-#endif
+	early_memtest(0, max_pfn_mapped << PAGE_SHIFT);
 }
 
 /*
-- 
1.7.7



* [PATCH -v2 06/13] x86, mm: Separate out calculate_table_space_size()
  2012-09-02  7:46 [PATCH -v2 00/13] x86, mm: init_memory_mapping cleanup Yinghai Lu
                   ` (4 preceding siblings ...)
  2012-09-02  7:46 ` [PATCH -v2 05/13] x86, mm: Find early page table only one time Yinghai Lu
@ 2012-09-02  7:46 ` Yinghai Lu
  2012-09-03  5:28   ` Pekka Enberg
  2012-09-02  7:46 ` [PATCH -v2 07/13] x86, mm: Move down two calculate_table_space_size down Yinghai Lu
                   ` (6 subsequent siblings)
  12 siblings, 1 reply; 33+ messages in thread
From: Yinghai Lu @ 2012-09-02  7:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin, Tejun Heo
  Cc: linux-kernel, Yinghai Lu

calculate_table_space_size() should take the physical address range that
needs to be mapped, while find_early_table_space() should take the range
that the page table buffer may be placed in. Separate the two to reduce
confusion.
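
As a rough worked example of what the size calculation adds up (standalone
arithmetic only; 8-byte table entries and the 64bit PUD/PMD sizes are
assumed here, with no large pages):

    #include <stdio.h>

    #define PAGE_SIZE 4096UL
    #define PMD_SIZE  (2UL << 20)
    #define PUD_SIZE  (1UL << 30)
    #define ROUNDUP(x, a) (((x) + (a) - 1) / (a) * (a))

    int main(void)
    {
        unsigned long end = 1UL << 30;  /* map [0, 1 GiB) with 4k pages */
        unsigned long puds = (end + PUD_SIZE - 1) / PUD_SIZE;
        unsigned long pmds = (end + PMD_SIZE - 1) / PMD_SIZE;
        unsigned long ptes = (end + PAGE_SIZE - 1) / PAGE_SIZE;
        unsigned long tables = ROUNDUP(puds * 8, PAGE_SIZE) +
                               ROUNDUP(pmds * 8, PAGE_SIZE) +
                               ROUNDUP(ptes * 8, PAGE_SIZE);

        printf("~%lu KiB of page tables\n", tables >> 10);  /* ~2056 KiB */
        return 0;
    }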

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
---
 arch/x86/mm/init.c |   39 ++++++++++++++++++++++++++++-----------
 1 files changed, 28 insertions(+), 11 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 0ada295..fcb44c5 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -37,11 +37,10 @@ struct map_range {
 
 static int page_size_mask;
 
-static void __init find_early_table_space(unsigned long begin,
+static unsigned long __init calculate_table_space_size(unsigned long begin,
 					  unsigned long end)
 {
-	unsigned long puds, pmds, ptes, tables, start = 0, good_end = end;
-	phys_addr_t base;
+	unsigned long puds, pmds, ptes, tables;
 
 	puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
 	tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
@@ -76,9 +75,17 @@ static void __init find_early_table_space(unsigned long begin,
 #ifdef CONFIG_X86_32
 	/* for fixmap */
 	tables += roundup(__end_of_fixed_addresses * sizeof(pte_t), PAGE_SIZE);
-	good_end = max_pfn_mapped << PAGE_SHIFT;
 #endif
 
+	return tables;
+}
+
+static void __init find_early_table_space(unsigned long start,
+					  unsigned long good_end,
+					  unsigned long tables)
+{
+	phys_addr_t base;
+
 	base = memblock_find_in_range(start, good_end, tables, PAGE_SIZE);
 	if (!base)
 		panic("Cannot find space for the kernel page tables");
@@ -86,10 +93,6 @@ static void __init find_early_table_space(unsigned long begin,
 	pgt_buf_start = base >> PAGE_SHIFT;
 	pgt_buf_end = pgt_buf_start;
 	pgt_buf_top = pgt_buf_start + (tables >> PAGE_SHIFT);
-
-	printk(KERN_DEBUG "kernel direct mapping tables up to %#lx @ [mem %#010lx-%#010lx]\n",
-		end - 1, pgt_buf_start << PAGE_SHIFT,
-		(pgt_buf_top << PAGE_SHIFT) - 1);
 }
 
 static void __init probe_page_size_mask(void)
@@ -282,6 +285,8 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
 
 void __init init_mem_mapping(void)
 {
+	unsigned long tables, good_end, end;
+
 	probe_page_size_mask();
 
 	/*
@@ -292,10 +297,18 @@ void __init init_mem_mapping(void)
 	 * nodes are discovered.
 	 */
 #ifdef CONFIG_X86_64
-	find_early_table_space(0, max_pfn<<PAGE_SHIFT);
+	end = max_pfn << PAGE_SHIFT;
+	good_end = end;
 #else
-	find_early_table_space(0, max_low_pfn<<PAGE_SHIFT);
+	end = max_low_pfn << PAGE_SHIFT;
+	good_end = max_pfn_mapped << PAGE_SHIFT;
 #endif
+	tables = calculate_table_space_size(0, end);
+	find_early_table_space(0, good_end, tables);
+	printk(KERN_DEBUG "kernel direct mapping tables up to %#lx @ [mem %#010lx-%#010lx] prealloc\n",
+		end - 1, pgt_buf_start << PAGE_SHIFT,
+		(pgt_buf_top << PAGE_SHIFT) - 1);
+
 	max_low_pfn_mapped = init_memory_mapping(0, max_low_pfn<<PAGE_SHIFT);
 	max_pfn_mapped = max_low_pfn_mapped;
 
@@ -322,9 +335,13 @@ void __init init_mem_mapping(void)
 	 * RO all the pagetable pages, including the ones that are beyond
 	 * pgt_buf_end at that time.
 	 */
-	if (pgt_buf_end > pgt_buf_start)
+	if (pgt_buf_end > pgt_buf_start) {
+		printk(KERN_DEBUG "kernel direct mapping tables up to %#lx @ [mem %#010lx-%#010lx] final\n",
+			end - 1, pgt_buf_start << PAGE_SHIFT,
+			(pgt_buf_end << PAGE_SHIFT) - 1);
 		x86_init.mapping.pagetable_reserve(PFN_PHYS(pgt_buf_start),
 				PFN_PHYS(pgt_buf_end));
+	}
 
 	/* stop the wrong using */
 	pgt_buf_top = 0;
-- 
1.7.7



* [PATCH -v2 07/13] x86, mm: Move down two calculate_table_space_size down.
  2012-09-02  7:46 [PATCH -v2 00/13] x86, mm: init_memory_mapping cleanup Yinghai Lu
                   ` (5 preceding siblings ...)
  2012-09-02  7:46 ` [PATCH -v2 06/13] x86, mm: Separate out calculate_table_space_size() Yinghai Lu
@ 2012-09-02  7:46 ` Yinghai Lu
  2012-09-03  5:29   ` Pekka Enberg
  2012-09-02  7:46 ` [PATCH -v2 08/13] x86: if kernel .text .data .bss are not marked as E820_RAM, complain and fix Yinghai Lu
                   ` (5 subsequent siblings)
  12 siblings, 1 reply; 33+ messages in thread
From: Yinghai Lu @ 2012-09-02  7:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin, Tejun Heo
  Cc: linux-kernel, Yinghai Lu

Move the two functions down so that calculate_table_space_size() can later
be made to call split_mem_range().

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
---
 arch/x86/mm/init.c |  116 ++++++++++++++++++++++++++--------------------------
 1 files changed, 58 insertions(+), 58 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index fcb44c5..a475d7f 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -37,64 +37,6 @@ struct map_range {
 
 static int page_size_mask;
 
-static unsigned long __init calculate_table_space_size(unsigned long begin,
-					  unsigned long end)
-{
-	unsigned long puds, pmds, ptes, tables;
-
-	puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
-	tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
-
-	if (page_size_mask & (1 << PG_LEVEL_1G)) {
-		unsigned long extra;
-
-		extra = end - ((end>>PUD_SHIFT) << PUD_SHIFT);
-		pmds = (extra + PMD_SIZE - 1) >> PMD_SHIFT;
-	} else
-		pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
-
-	tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
-
-	if (page_size_mask & (1 << PG_LEVEL_2M)) {
-		unsigned long extra;
-
-		extra = end - ((end>>PMD_SHIFT) << PMD_SHIFT);
-#ifdef CONFIG_X86_32
-		extra += PMD_SIZE;
-#endif
-		/* The first 2/4M doesn't use large pages. */
-		if (begin < PMD_SIZE)
-			extra += (PMD_SIZE - begin) >> PAGE_SHIFT;
-
-		ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
-	} else
-		ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;
-
-	tables += roundup(ptes * sizeof(pte_t), PAGE_SIZE);
-
-#ifdef CONFIG_X86_32
-	/* for fixmap */
-	tables += roundup(__end_of_fixed_addresses * sizeof(pte_t), PAGE_SIZE);
-#endif
-
-	return tables;
-}
-
-static void __init find_early_table_space(unsigned long start,
-					  unsigned long good_end,
-					  unsigned long tables)
-{
-	phys_addr_t base;
-
-	base = memblock_find_in_range(start, good_end, tables, PAGE_SIZE);
-	if (!base)
-		panic("Cannot find space for the kernel page tables");
-
-	pgt_buf_start = base >> PAGE_SHIFT;
-	pgt_buf_end = pgt_buf_start;
-	pgt_buf_top = pgt_buf_start + (tables >> PAGE_SHIFT);
-}
-
 static void __init probe_page_size_mask(void)
 {
 #if !defined(CONFIG_DEBUG_PAGEALLOC) && !defined(CONFIG_KMEMCHECK)
@@ -249,6 +191,64 @@ static int __meminit split_mem_range(struct map_range *mr, int nr_range,
 	return nr_range;
 }
 
+static unsigned long __init calculate_table_space_size(unsigned long begin,
+					  unsigned long end)
+{
+	unsigned long puds, pmds, ptes, tables;
+
+	puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
+	tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
+
+	if (page_size_mask & (1 << PG_LEVEL_1G)) {
+		unsigned long extra;
+
+		extra = end - ((end>>PUD_SHIFT) << PUD_SHIFT);
+		pmds = (extra + PMD_SIZE - 1) >> PMD_SHIFT;
+	} else
+		pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
+
+	tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
+
+	if (page_size_mask & (1 << PG_LEVEL_2M)) {
+		unsigned long extra;
+
+		extra = end - ((end>>PMD_SHIFT) << PMD_SHIFT);
+#ifdef CONFIG_X86_32
+		extra += PMD_SIZE;
+#endif
+		/* The first 2/4M doesn't use large pages. */
+		if (begin < PMD_SIZE)
+			extra += (PMD_SIZE - begin) >> PAGE_SHIFT;
+
+		ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
+	} else
+		ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;
+
+	tables += roundup(ptes * sizeof(pte_t), PAGE_SIZE);
+
+#ifdef CONFIG_X86_32
+	/* for fixmap */
+	tables += roundup(__end_of_fixed_addresses * sizeof(pte_t), PAGE_SIZE);
+#endif
+
+	return tables;
+}
+
+static void __init find_early_table_space(unsigned long start,
+					  unsigned long good_end,
+					  unsigned long tables)
+{
+	phys_addr_t base;
+
+	base = memblock_find_in_range(start, good_end, tables, PAGE_SIZE);
+	if (!base)
+		panic("Cannot find space for the kernel page tables");
+
+	pgt_buf_start = base >> PAGE_SHIFT;
+	pgt_buf_end = pgt_buf_start;
+	pgt_buf_top = pgt_buf_start + (tables >> PAGE_SHIFT);
+}
+
 /*
  * Setup the direct mapping of the physical memory at PAGE_OFFSET.
  * This runs before bootmem is initialized and gets pages directly from
-- 
1.7.7



* [PATCH -v2 08/13] x86: if kernel .text .data .bss are not marked as E820_RAM, complain and fix
  2012-09-02  7:46 [PATCH -v2 00/13] x86, mm: init_memory_mapping cleanup Yinghai Lu
                   ` (6 preceding siblings ...)
  2012-09-02  7:46 ` [PATCH -v2 07/13] x86, mm: Move down two calculate_table_space_size down Yinghai Lu
@ 2012-09-02  7:46 ` Yinghai Lu
  2012-09-03  5:31   ` Pekka Enberg
  2012-09-02  7:46 ` [PATCH -v2 09/13] x86: Fixup code testing if a pfn is direct mapped Yinghai Lu
                   ` (4 subsequent siblings)
  12 siblings, 1 reply; 33+ messages in thread
From: Yinghai Lu @ 2012-09-02  7:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin, Tejun Heo
  Cc: linux-kernel

From: Jacob Shin <jacob.shin@amd.com>

There could be cases where a user-supplied memmap=exactmap memory map does
not mark the region where the kernel .text, .data and .bss reside as
E820_RAM, as reported here:

https://lkml.org/lkml/2012/8/14/86

Handle it by complaining, and adding the range back into the e820.
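
A minimal standalone sketch of the kind of check being relied on here
(made-up regions below; unlike the real e820_all_mapped(), this only
handles containment within a single entry):

    #include <stdbool.h>
    #include <stdio.h>

    struct region { unsigned long long start, end; };  /* [start, end) */

    /* True only if [start, end) lies entirely inside one RAM region. */
    static bool all_mapped(const struct region *ram, int nr,
                           unsigned long long start, unsigned long long end)
    {
        int i;

        for (i = 0; i < nr; i++)
            if (start >= ram[i].start && end <= ram[i].end)
                return true;
        return false;
    }

    int main(void)
    {
        const struct region ram[] = { { 0x100000, 0x40000000 } };

        /* kernel text/data/bss at 16 MiB..20 MiB: covered, prints 1 */
        printf("%d\n", all_mapped(ram, 1, 0x1000000, 0x1400000));
        return 0;
    }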

Signed-off-by: Jacob Shin <jacob.shin@amd.com>
---
 arch/x86/kernel/setup.c |   14 ++++++++++++++
 1 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index c30c78c..587dcd9 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -831,6 +831,20 @@ void __init setup_arch(char **cmdline_p)
 	insert_resource(&iomem_resource, &data_resource);
 	insert_resource(&iomem_resource, &bss_resource);
 
+	/*
+	 * Complain if .text .data and .bss are not marked as E820_RAM and
+	 * attempt to fix it by adding the range. We may have a confused BIOS,
+	 * or the user may have incorrectly supplied it via memmap=exactmap. If
+	 * we really are running on top non-RAM, we will crash later anyways.
+	 */
+	if (!e820_all_mapped(code_resource.start, __pa(__brk_limit), E820_RAM)) {
+		pr_warn(".text .data .bss are not marked as E820_RAM!\n");
+
+		e820_add_region(code_resource.start,
+				__pa(__brk_limit) - code_resource.start + 1,
+				E820_RAM);
+	}
+
 	trim_bios_range();
 #ifdef CONFIG_X86_32
 	if (ppro_with_ram_bug()) {
-- 
1.7.7



* [PATCH -v2 09/13] x86: Fixup code testing if a pfn is direct mapped
  2012-09-02  7:46 [PATCH -v2 00/13] x86, mm: init_memory_mapping cleanup Yinghai Lu
                   ` (7 preceding siblings ...)
  2012-09-02  7:46 ` [PATCH -v2 08/13] x86: if kernel .text .data .bss are not marked as E820_RAM, complain and fix Yinghai Lu
@ 2012-09-02  7:46 ` Yinghai Lu
  2012-09-03  5:32   ` Pekka Enberg
  2012-09-02  7:46 ` [PATCH -v2 10/13] x86: Only direct map addresses that are marked as E820_RAM Yinghai Lu
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 33+ messages in thread
From: Yinghai Lu @ 2012-09-02  7:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin, Tejun Heo
  Cc: linux-kernel

From: Jacob Shin <jacob.shin@amd.com>

Update code that previously assumed pfns [ 0 - max_low_pfn_mapped ) and
[ 4GB - max_pfn_mapped ) were always direct mapped, to now look up
pfn_mapped ranges instead.
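
A small standalone sketch of the intended caller pattern: turn a physical
address into a pfn range and ask whether it is already direct mapped (the
lookup body below is a made-up stand-in, not the real pfn_mapped[] walk).

    #include <stdbool.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    /* Stand-in: pretend only pfns below 4 GiB are direct mapped. */
    static bool pfn_range_is_mapped(unsigned long start_pfn,
                                    unsigned long end_pfn)
    {
        return start_pfn < end_pfn &&
               end_pfn <= (1UL << (32 - PAGE_SHIFT));
    }

    int main(void)
    {
        unsigned long long tseg = 0xcff00000ULL;    /* example TSEG base */
        unsigned long pfn = tseg >> PAGE_SHIFT;

        if (pfn_range_is_mapped(pfn, pfn + 1))
            printf("tseg page is direct mapped\n");
        return 0;
    }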


-v2: change the applying sequence to keep git bisect working, so add a
     dummy pfn_range_is_mapped(). - Yinghai Lu

Signed-off-by: Jacob Shin <jacob.shin@amd.com>
---
 arch/x86/include/asm/page_types.h |    8 ++++++++
 arch/x86/kernel/cpu/amd.c         |    8 +++-----
 arch/x86/platform/efi/efi.c       |    8 ++++----
 3 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
index e21fdd1..45aae6e 100644
--- a/arch/x86/include/asm/page_types.h
+++ b/arch/x86/include/asm/page_types.h
@@ -51,6 +51,14 @@ static inline phys_addr_t get_max_mapped(void)
 	return (phys_addr_t)max_pfn_mapped << PAGE_SHIFT;
 }
 
+static inline bool pfn_range_is_mapped(unsigned long start_pfn,
+					unsigned long end_pfn)
+{
+	return end_pfn <= max_low_pfn_mapped ||
+	       (end_pfn > (1UL << (32 - PAGE_SHIFT)) &&
+		end_pfn <= max_pfn_mapped);
+}
+
 extern unsigned long init_memory_mapping(unsigned long start,
 					 unsigned long end);
 
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 9d92e19..4235553 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -676,12 +676,10 @@ static void __cpuinit init_amd(struct cpuinfo_x86 *c)
 		 * benefit in doing so.
 		 */
 		if (!rdmsrl_safe(MSR_K8_TSEG_ADDR, &tseg)) {
+			unsigned long pfn = tseg >> PAGE_SHIFT;
+
 			printk(KERN_DEBUG "tseg: %010llx\n", tseg);
-			if ((tseg>>PMD_SHIFT) <
-				(max_low_pfn_mapped>>(PMD_SHIFT-PAGE_SHIFT)) ||
-				((tseg>>PMD_SHIFT) <
-				(max_pfn_mapped>>(PMD_SHIFT-PAGE_SHIFT)) &&
-				(tseg>>PMD_SHIFT) >= (1ULL<<(32 - PMD_SHIFT))))
+			if (pfn_range_is_mapped(pfn, pfn + 1))
 				set_memory_4k((unsigned long)__va(tseg), 1);
 		}
 	}
diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
index 92660eda..f1facde 100644
--- a/arch/x86/platform/efi/efi.c
+++ b/arch/x86/platform/efi/efi.c
@@ -776,7 +776,7 @@ void __init efi_enter_virtual_mode(void)
 	efi_memory_desc_t *md, *prev_md = NULL;
 	efi_status_t status;
 	unsigned long size;
-	u64 end, systab, addr, npages, end_pfn;
+	u64 end, systab, addr, npages, start_pfn, end_pfn;
 	void *p, *va, *new_memmap = NULL;
 	int count = 0;
 
@@ -827,10 +827,10 @@ void __init efi_enter_virtual_mode(void)
 		size = md->num_pages << EFI_PAGE_SHIFT;
 		end = md->phys_addr + size;
 
+		start_pfn = PFN_DOWN(md->phys_addr);
 		end_pfn = PFN_UP(end);
-		if (end_pfn <= max_low_pfn_mapped
-		    || (end_pfn > (1UL << (32 - PAGE_SHIFT))
-			&& end_pfn <= max_pfn_mapped))
+
+		if (pfn_range_is_mapped(start_pfn, end_pfn))
 			va = __va(md->phys_addr);
 		else
 			va = efi_ioremap(md->phys_addr, size, md->type);
-- 
1.7.7



* [PATCH -v2 10/13] x86: Only direct map addresses that are marked as E820_RAM
  2012-09-02  7:46 [PATCH -v2 00/13] x86, mm: init_memory_mapping cleanup Yinghai Lu
                   ` (8 preceding siblings ...)
  2012-09-02  7:46 ` [PATCH -v2 09/13] x86: Fixup code testing if a pfn is direct mapped Yinghai Lu
@ 2012-09-02  7:46 ` Yinghai Lu
  2012-09-03  5:34   ` Pekka Enberg
  2012-09-02  7:46 ` [PATCH -v2 11/13] x86/mm: calculate_table_space_size based on memory ranges that are being mapped Yinghai Lu
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 33+ messages in thread
From: Yinghai Lu @ 2012-09-02  7:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin, Tejun Heo
  Cc: linux-kernel

From: Jacob Shin <jacob.shin@amd.com>

Currently direct mappings are created for [ 0 to max_low_pfn<<PAGE_SHIFT )
and [ 4GB to max_pfn<<PAGE_SHIFT ), which may include regions that are not
backed by actual DRAM. This is fine for holes under 4GB, which the fixed and
variable range MTRRs mark as UC. However, we run into trouble with higher
memory addresses, which cannot be covered by MTRRs.

Our system with 1TB of RAM has an e820 that looks like this:

 BIOS-e820: [mem 0x0000000000000000-0x00000000000983ff] usable
 BIOS-e820: [mem 0x0000000000098400-0x000000000009ffff] reserved
 BIOS-e820: [mem 0x00000000000d0000-0x00000000000fffff] reserved
 BIOS-e820: [mem 0x0000000000100000-0x00000000c7ebffff] usable
 BIOS-e820: [mem 0x00000000c7ec0000-0x00000000c7ed7fff] ACPI data
 BIOS-e820: [mem 0x00000000c7ed8000-0x00000000c7ed9fff] ACPI NVS
 BIOS-e820: [mem 0x00000000c7eda000-0x00000000c7ffffff] reserved
 BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
 BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
 BIOS-e820: [mem 0x00000000fff00000-0x00000000ffffffff] reserved
 BIOS-e820: [mem 0x0000000100000000-0x000000e037ffffff] usable
 BIOS-e820: [mem 0x000000e038000000-0x000000fcffffffff] reserved
 BIOS-e820: [mem 0x0000010000000000-0x0000011ffeffffff] usable

and so direct mappings are created for the huge memory hole between
0x000000e038000000 and 0x0000010000000000. Even though the kernel never
generates memory accesses in that region, since the page tables incorrectly
mark it as WB, our (AMD) processor ends up causing an MCE while doing some
memory bookkeeping/optimizations around that area.

This patch iterates through e820 and only direct maps ranges that are
marked as E820_RAM, and keeps track of those pfn ranges. Depending on
the alignment of E820 ranges, this may possibly result in using smaller
size (i.e. 4K instead of 2M or 1G) page tables.
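
With the e820 above, for example, the direct map would then be built only
over the usable ranges (plus the always-mapped ISA area below 1M): roughly
[0, 0x98400), [0x100000, 0xc7ec0000), [0x100000000, 0xe038000000) and
[0x10000000000, 0x11fff000000), leaving the reserved hole between
0xe038000000 and 0x10000000000 without page table entries.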

-v2: move changes from setup.c to mm/init.c, also use for_each_mem_pfn_range
	instead.  - Yinghai Lu
-v3: add calculate_all_table_space_size() to get correct needed page table
	size. - Yinghai Lu

Signed-off-by: Jacob Shin <jacob.shin@amd.com>
---
 arch/x86/include/asm/page_types.h |    8 +--
 arch/x86/kernel/setup.c           |    8 ++-
 arch/x86/mm/init.c                |  119 +++++++++++++++++++++++++++++++++----
 arch/x86/mm/init_64.c             |    6 +-
 4 files changed, 116 insertions(+), 25 deletions(-)

diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
index 45aae6e..54c9787 100644
--- a/arch/x86/include/asm/page_types.h
+++ b/arch/x86/include/asm/page_types.h
@@ -51,13 +51,7 @@ static inline phys_addr_t get_max_mapped(void)
 	return (phys_addr_t)max_pfn_mapped << PAGE_SHIFT;
 }
 
-static inline bool pfn_range_is_mapped(unsigned long start_pfn,
-					unsigned long end_pfn)
-{
-	return end_pfn <= max_low_pfn_mapped ||
-	       (end_pfn > (1UL << (32 - PAGE_SHIFT)) &&
-		end_pfn <= max_pfn_mapped);
-}
+bool pfn_range_is_mapped(unsigned long start_pfn, unsigned long end_pfn);
 
 extern unsigned long init_memory_mapping(unsigned long start,
 					 unsigned long end);
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 587dcd9..2eb91b7 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -115,9 +115,11 @@
 #include <asm/prom.h>
 
 /*
- * end_pfn only includes RAM, while max_pfn_mapped includes all e820 entries.
- * The direct mapping extends to max_pfn_mapped, so that we can directly access
- * apertures, ACPI and other tables without having to play with fixmaps.
+ * max_low_pfn_mapped: highest direct mapped pfn under 4GB
+ * max_pfn_mapped:     highest direct mapped pfn over 4GB
+ *
+ * The direct mapping only covers E820_RAM regions, so the ranges and gaps are
+ * represented by pfn_mapped
  */
 unsigned long max_low_pfn_mapped;
 unsigned long max_pfn_mapped;
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index a475d7f..47b6e41 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -234,6 +234,38 @@ static unsigned long __init calculate_table_space_size(unsigned long begin,
 	return tables;
 }
 
+static unsigned long __init calculate_all_table_space_size(void)
+{
+	unsigned long start_pfn, end_pfn;
+	unsigned long tables;
+	int i;
+
+	/* the ISA range is always mapped regardless of memory holes */
+	tables = calculate_table_space_size(0, ISA_END_ADDRESS);
+
+	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
+		u64 start = start_pfn << PAGE_SHIFT;
+		u64 end = end_pfn << PAGE_SHIFT;
+
+		if (end <= ISA_END_ADDRESS)
+			continue;
+
+		if (start < ISA_END_ADDRESS)
+			start = ISA_END_ADDRESS;
+#ifdef CONFIG_X86_32
+		/* on 32 bit, we only map up to max_low_pfn */
+		if ((start >> PAGE_SHIFT) >= max_low_pfn)
+			continue;
+
+		if ((end >> PAGE_SHIFT) > max_low_pfn)
+			end = max_low_pfn << PAGE_SHIFT;
+#endif
+		tables += calculate_table_space_size(start, end);
+	}
+
+	return tables;
+}
+
 static void __init find_early_table_space(unsigned long start,
 					  unsigned long good_end,
 					  unsigned long tables)
@@ -249,6 +281,33 @@ static void __init find_early_table_space(unsigned long start,
 	pgt_buf_top = pgt_buf_start + (tables >> PAGE_SHIFT);
 }
 
+static struct range pfn_mapped[E820_X_MAX];
+static int nr_pfn_mapped;
+
+static void add_pfn_range_mapped(unsigned long start_pfn, unsigned long end_pfn)
+{
+	nr_pfn_mapped = add_range_with_merge(pfn_mapped, E820_X_MAX,
+					     nr_pfn_mapped, start_pfn, end_pfn);
+	nr_pfn_mapped = clean_sort_range(pfn_mapped, E820_X_MAX);
+
+	max_pfn_mapped = max(max_pfn_mapped, end_pfn);
+
+	if (end_pfn <= (1UL << (32 - PAGE_SHIFT)))
+		max_low_pfn_mapped = max(max_low_pfn_mapped, end_pfn);
+}
+
+bool pfn_range_is_mapped(unsigned long start_pfn, unsigned long end_pfn)
+{
+	int i;
+
+	for (i = 0; i < nr_pfn_mapped; i++)
+		if ((start_pfn >= pfn_mapped[i].start) &&
+		    (end_pfn <= pfn_mapped[i].end))
+			return true;
+
+	return false;
+}
+
 /*
  * Setup the direct mapping of the physical memory at PAGE_OFFSET.
  * This runs before bootmem is initialized and gets pages directly from
@@ -280,9 +339,55 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
 
 	__flush_tlb_all();
 
+	add_pfn_range_mapped(start >> PAGE_SHIFT, ret >> PAGE_SHIFT);
+
 	return ret >> PAGE_SHIFT;
 }
 
+/*
+ * Iterate through E820 memory map and create direct mappings for only E820_RAM
+ * regions. We cannot simply create direct mappings for all pfns from
+ * [0 to max_low_pfn) and [4GB to max_pfn) because of possible memory holes in
+ * high addresses that cannot be marked as UC by fixed/variable range MTRRs.
+ * Depending on the alignment of E820 ranges, this may possibly result in using
+ * smaller size (i.e. 4K instead of 2M or 1G) page tables.
+ */
+static void __init init_all_memory_mapping(void)
+{
+	unsigned long start_pfn, end_pfn;
+	int i;
+
+	/* the ISA range is always mapped regardless of memory holes */
+	init_memory_mapping(0, ISA_END_ADDRESS);
+
+	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
+		u64 start = start_pfn << PAGE_SHIFT;
+		u64 end = end_pfn << PAGE_SHIFT;
+
+		if (end <= ISA_END_ADDRESS)
+			continue;
+
+		if (start < ISA_END_ADDRESS)
+			start = ISA_END_ADDRESS;
+#ifdef CONFIG_X86_32
+		/* on 32 bit, we only map up to max_low_pfn */
+		if ((start >> PAGE_SHIFT) >= max_low_pfn)
+			continue;
+
+		if ((end >> PAGE_SHIFT) > max_low_pfn)
+			end = max_low_pfn << PAGE_SHIFT;
+#endif
+		init_memory_mapping(start, end);
+	}
+
+#ifdef CONFIG_X86_64
+	if (max_pfn > max_low_pfn) {
+		/* can we preseve max_low_pfn ?*/
+		max_low_pfn = max_pfn;
+	}
+#endif
+}
+
 void __init init_mem_mapping(void)
 {
 	unsigned long tables, good_end, end;
@@ -303,23 +408,15 @@ void __init init_mem_mapping(void)
 	end = max_low_pfn << PAGE_SHIFT;
 	good_end = max_pfn_mapped << PAGE_SHIFT;
 #endif
-	tables = calculate_table_space_size(0, end);
+	tables = calculate_all_table_space_size();
 	find_early_table_space(0, good_end, tables);
 	printk(KERN_DEBUG "kernel direct mapping tables up to %#lx @ [mem %#010lx-%#010lx] prealloc\n",
 		end - 1, pgt_buf_start << PAGE_SHIFT,
 		(pgt_buf_top << PAGE_SHIFT) - 1);
 
-	max_low_pfn_mapped = init_memory_mapping(0, max_low_pfn<<PAGE_SHIFT);
-	max_pfn_mapped = max_low_pfn_mapped;
+	max_pfn_mapped = 0; /* will get exact value next */
+	init_all_memory_mapping();
 
-#ifdef CONFIG_X86_64
-	if (max_pfn > max_low_pfn) {
-		max_pfn_mapped = init_memory_mapping(1UL<<32,
-						     max_pfn<<PAGE_SHIFT);
-		/* can we preseve max_low_pfn ?*/
-		max_low_pfn = max_pfn;
-	}
-#endif
 	/*
 	 * Reserve the kernel pagetable pages we used (pgt_buf_start -
 	 * pgt_buf_end) and free the other ones (pgt_buf_end - pgt_buf_top)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 2b6b4a3..ab558eb 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -657,13 +657,11 @@ int arch_add_memory(int nid, u64 start, u64 size)
 {
 	struct pglist_data *pgdat = NODE_DATA(nid);
 	struct zone *zone = pgdat->node_zones + ZONE_NORMAL;
-	unsigned long last_mapped_pfn, start_pfn = start >> PAGE_SHIFT;
+	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 	int ret;
 
-	last_mapped_pfn = init_memory_mapping(start, start + size);
-	if (last_mapped_pfn > max_pfn_mapped)
-		max_pfn_mapped = last_mapped_pfn;
+	init_memory_mapping(start, start + size);
 
 	ret = __add_pages(nid, zone, start_pfn, nr_pages);
 	WARN_ON_ONCE(ret);
-- 
1.7.7



* [PATCH -v2 11/13] x86/mm: calculate_table_space_size based on memory ranges that are being mapped
  2012-09-02  7:46 [PATCH -v2 00/13] x86, mm: init_memory_mapping cleanup Yinghai Lu
                   ` (9 preceding siblings ...)
  2012-09-02  7:46 ` [PATCH -v2 10/13] x86: Only direct map addresses that are marked as E820_RAM Yinghai Lu
@ 2012-09-02  7:46 ` Yinghai Lu
  2012-09-03  5:36   ` Pekka Enberg
  2012-09-02  7:46 ` [PATCH -v2 12/13] x86, mm: Use func pointer to table size calculation and mapping Yinghai Lu
  2012-09-02  7:46 ` [PATCH -v2 13/13] x86, 64bit: Map first 1M ram early before memblock_x86_fill() Yinghai Lu
  12 siblings, 1 reply; 33+ messages in thread
From: Yinghai Lu @ 2012-09-02  7:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin, Tejun Heo
  Cc: linux-kernel

From: Jacob Shin <jacob.shin@amd.com>

Current logic finds enough space for the direct mapping page tables from 0
to end. Instead, we only need to find enough space to cover mr[0].start to
mr[nr_range - 1].end -- the ranges that are actually being mapped by
init_memory_mapping().

This patch also reportedly fixes suspend/resume issue reported in:

https://lkml.org/lkml/2012/8/11/83

-v2: update with calculate_table_space_size()
     clear max_pfn_mapped before init_all_memory_mapping to get right value
					  -Yinghai Lu

Signed-off-by: Jacob Shin <jacob.shin@amd.com>
---
 arch/x86/mm/init.c |   51 ++++++++++++++++++++++++++++++---------------------
 1 files changed, 30 insertions(+), 21 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 47b6e41..7830db9 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -191,39 +191,48 @@ static int __meminit split_mem_range(struct map_range *mr, int nr_range,
 	return nr_range;
 }
 
-static unsigned long __init calculate_table_space_size(unsigned long begin,
+static unsigned long __init calculate_table_space_size(unsigned long start,
 					  unsigned long end)
 {
-	unsigned long puds, pmds, ptes, tables;
+	unsigned long puds = 0, pmds = 0, ptes = 0, tables;
+	struct map_range mr[NR_RANGE_MR];
+	int nr_range, i;
 
-	puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
-	tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
+	pr_info("calculate_table_space_size: [mem %#010lx-%#010lx]\n",
+	       start, end - 1);
 
-	if (page_size_mask & (1 << PG_LEVEL_1G)) {
-		unsigned long extra;
+	memset(mr, 0, sizeof(mr));
+	nr_range = 0;
+	nr_range = split_mem_range(mr, nr_range, start, end);
 
-		extra = end - ((end>>PUD_SHIFT) << PUD_SHIFT);
-		pmds = (extra + PMD_SIZE - 1) >> PMD_SHIFT;
-	} else
-		pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
+	for (i = 0; i < nr_range; i++) {
+		unsigned long range, extra;
 
-	tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
+		range = mr[i].end - mr[i].start;
+		puds += (range + PUD_SIZE - 1) >> PUD_SHIFT;
 
-	if (page_size_mask & (1 << PG_LEVEL_2M)) {
-		unsigned long extra;
+		if (mr[i].page_size_mask & (1 << PG_LEVEL_1G)) {
+			extra = range - ((range >> PUD_SHIFT) << PUD_SHIFT);
+			pmds += (extra + PMD_SIZE - 1) >> PMD_SHIFT;
+		} else
+			pmds += (range + PMD_SIZE - 1) >> PMD_SHIFT;
 
-		extra = end - ((end>>PMD_SHIFT) << PMD_SHIFT);
+		if (mr[i].page_size_mask & (1 << PG_LEVEL_2M)) {
+			extra = range - ((range >> PMD_SHIFT) << PMD_SHIFT);
 #ifdef CONFIG_X86_32
-		extra += PMD_SIZE;
+			extra += PMD_SIZE;
 #endif
-		/* The first 2/4M doesn't use large pages. */
-		if (begin < PMD_SIZE)
-			extra += (PMD_SIZE - begin) >> PAGE_SHIFT;
+			/* The first 2/4M doesn't use large pages. */
+			if (mr[i].start < PMD_SIZE)
+				extra += range;
 
-		ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
-	} else
-		ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;
+			ptes += (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
+		} else
+			ptes += (range + PAGE_SIZE - 1) >> PAGE_SHIFT;
+	}
 
+	tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
+	tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
 	tables += roundup(ptes * sizeof(pte_t), PAGE_SIZE);
 
 #ifdef CONFIG_X86_32
-- 
1.7.7



* [PATCH -v2 12/13] x86, mm: Use func pointer to table size calculation and mapping.
  2012-09-02  7:46 [PATCH -v2 00/13] x86, mm: init_memory_mapping cleanup Yinghai Lu
                   ` (10 preceding siblings ...)
  2012-09-02  7:46 ` [PATCH -v2 11/13] x86/mm: calculate_table_space_size based on memory ranges that are being mapped Yinghai Lu
@ 2012-09-02  7:46 ` Yinghai Lu
  2012-09-03  5:43   ` Pekka Enberg
  2012-09-02  7:46 ` [PATCH -v2 13/13] x86, 64bit: Map first 1M ram early before memblock_x86_fill() Yinghai Lu
  12 siblings, 1 reply; 33+ messages in thread
From: Yinghai Lu @ 2012-09-02  7:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin, Tejun Heo
  Cc: linux-kernel, Yinghai Lu

They all need to iterate over the RAM ranges in the same sequence, so add a
shared function to reduce duplicated code.
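
A small standalone sketch of the shared-iterator shape this introduces: one
walker owns the range sequence, and the two passes are just different
callbacks (made-up ranges and trivial callback bodies below).

    #include <stdio.h>

    /* Walk a fixed set of ranges, calling work_fn on each in order. */
    static void walk_ram_ranges(void (*work_fn)(unsigned long start,
                                                unsigned long end,
                                                void *data),
                                void *data)
    {
        static const unsigned long ranges[][2] = {
            { 0x0, 0x100000 }, { 0x100000, 0xc7ec0000 },
        };
        unsigned int i;

        for (i = 0; i < sizeof(ranges) / sizeof(ranges[0]); i++)
            work_fn(ranges[i][0], ranges[i][1], data);
    }

    static void size_fn(unsigned long start, unsigned long end, void *data)
    {
        *(unsigned long *)data += end - start;  /* stand-in for sizing */
    }

    static void map_fn(unsigned long start, unsigned long end, void *data)
    {
        (void)data;
        printf("map [%#lx-%#lx)\n", start, end);
    }

    int main(void)
    {
        unsigned long total = 0;

        walk_ram_ranges(size_fn, &total);       /* pass 1: sizing */
        walk_ram_ranges(map_fn, NULL);          /* pass 2: mapping */
        printf("total %#lx bytes\n", total);
        return 0;
    }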

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
---
 arch/x86/mm/init.c |   64 ++++++++++++++++++---------------------------------
 1 files changed, 23 insertions(+), 41 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 7830db9..343d925 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -243,14 +243,15 @@ static unsigned long __init calculate_table_space_size(unsigned long start,
 	return tables;
 }
 
-static unsigned long __init calculate_all_table_space_size(void)
+static void __init with_all_ram_ranges(
+			void (*work_fn)(unsigned long, unsigned long, void *),
+			void *data)
 {
 	unsigned long start_pfn, end_pfn;
-	unsigned long tables;
 	int i;
 
 	/* the ISA range is always mapped regardless of memory holes */
-	tables = calculate_table_space_size(0, ISA_END_ADDRESS);
+	work_fn(0, ISA_END_ADDRESS, data);
 
 	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
 		u64 start = start_pfn << PAGE_SHIFT;
@@ -269,10 +270,15 @@ static unsigned long __init calculate_all_table_space_size(void)
 		if ((end >> PAGE_SHIFT) > max_low_pfn)
 			end = max_low_pfn << PAGE_SHIFT;
 #endif
-		tables += calculate_table_space_size(start, end);
+		work_fn(start, end, data);
 	}
+}
 
-	return tables;
+static void __init size_work_fn(unsigned long start, unsigned long end, void *data)
+{
+	unsigned long *size = data;
+
+	*size += calculate_table_space_size(start, end);
 }
 
 static void __init find_early_table_space(unsigned long start,
@@ -361,45 +367,15 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
  * Depending on the alignment of E820 ranges, this may possibly result in using
  * smaller size (i.e. 4K instead of 2M or 1G) page tables.
  */
-static void __init init_all_memory_mapping(void)
+static void __init mapping_work_fn(unsigned long start, unsigned long end,
+					 void *data)
 {
-	unsigned long start_pfn, end_pfn;
-	int i;
-
-	/* the ISA range is always mapped regardless of memory holes */
-	init_memory_mapping(0, ISA_END_ADDRESS);
-
-	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
-		u64 start = start_pfn << PAGE_SHIFT;
-		u64 end = end_pfn << PAGE_SHIFT;
-
-		if (end <= ISA_END_ADDRESS)
-			continue;
-
-		if (start < ISA_END_ADDRESS)
-			start = ISA_END_ADDRESS;
-#ifdef CONFIG_X86_32
-		/* on 32 bit, we only map up to max_low_pfn */
-		if ((start >> PAGE_SHIFT) >= max_low_pfn)
-			continue;
-
-		if ((end >> PAGE_SHIFT) > max_low_pfn)
-			end = max_low_pfn << PAGE_SHIFT;
-#endif
-		init_memory_mapping(start, end);
-	}
-
-#ifdef CONFIG_X86_64
-	if (max_pfn > max_low_pfn) {
-		/* can we preseve max_low_pfn ?*/
-		max_low_pfn = max_pfn;
-	}
-#endif
+	init_memory_mapping(start, end);
 }
 
 void __init init_mem_mapping(void)
 {
-	unsigned long tables, good_end, end;
+	unsigned long tables = 0, good_end, end;
 
 	probe_page_size_mask();
 
@@ -417,15 +393,21 @@ void __init init_mem_mapping(void)
 	end = max_low_pfn << PAGE_SHIFT;
 	good_end = max_pfn_mapped << PAGE_SHIFT;
 #endif
-	tables = calculate_all_table_space_size();
+	with_all_ram_ranges(size_work_fn, &tables);
 	find_early_table_space(0, good_end, tables);
 	printk(KERN_DEBUG "kernel direct mapping tables up to %#lx @ [mem %#010lx-%#010lx] prealloc\n",
 		end - 1, pgt_buf_start << PAGE_SHIFT,
 		(pgt_buf_top << PAGE_SHIFT) - 1);
 
 	max_pfn_mapped = 0; /* will get exact value next */
-	init_all_memory_mapping();
+	with_all_ram_ranges(mapping_work_fn, NULL);
 
+#ifdef CONFIG_X86_64
+	if (max_pfn > max_low_pfn) {
+		/* can we preseve max_low_pfn ?*/
+		max_low_pfn = max_pfn;
+	}
+#endif
 	/*
 	 * Reserve the kernel pagetable pages we used (pgt_buf_start -
 	 * pgt_buf_end) and free the other ones (pgt_buf_end - pgt_buf_top)
-- 
1.7.7



* [PATCH -v2 13/13] x86, 64bit: Map first 1M ram early before memblock_x86_fill()
  2012-09-02  7:46 [PATCH -v2 00/13] x86, mm: init_memory_mapping cleanup Yinghai Lu
                   ` (11 preceding siblings ...)
  2012-09-02  7:46 ` [PATCH -v2 12/13] x86, mm: Use func pointer to table size calculation and mapping Yinghai Lu
@ 2012-09-02  7:46 ` Yinghai Lu
  2012-09-03  5:50   ` Pekka Enberg
  12 siblings, 1 reply; 33+ messages in thread
From: Yinghai Lu @ 2012-09-02  7:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin, Tejun Heo
  Cc: linux-kernel, Yinghai Lu

This one intends to fix a bug: when booting via EFI with too many memmap
entries, the memblock memory array or reserved array needs to be doubled.

For 64bit, we have a low kernel mapping and a high kernel mapping.
The high kernel mapping is done early in head_64.S.
The low kernel mapping is done in init_memory_mapping().

max_pfn_mapped currently actually reflects the high mapping, so when we
later need a buffer for doubling the memblock array we can not use it,
because __va() returns a virtual address in the low kernel mapping.

This patch maps the first 1M range early and finds the early page table
space in BRK.

Also add max_pfn_high_mapped to track the high mapped range.
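
To illustrate the __va() problem with a standalone sketch (the two base
constants below are only illustrative of the 64bit layout, not
authoritative values):

    #include <stdio.h>

    #define PAGE_OFFSET_LOW  0xffff880000000000UL   /* base of the low map */
    #define KERNEL_MAP_HIGH  0xffffffff80000000UL   /* base of the high map */

    static unsigned long illustrative_va(unsigned long phys)
    {
        return phys + PAGE_OFFSET_LOW;      /* what __va() hands back */
    }

    int main(void)
    {
        unsigned long buf_phys = 0x200000;  /* some early buffer */

        /*
         * __va() always answers in the low mapping; if only the high
         * mapping from head_64.S exists so far, that address is not yet
         * backed by page tables and cannot be dereferenced.
         */
        printf("low-map va  %#lx\n", illustrative_va(buf_phys));
        printf("high-map va %#lx\n", KERNEL_MAP_HIGH + buf_phys);
        return 0;
    }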

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
---
 arch/x86/include/asm/page_types.h |    1 +
 arch/x86/include/asm/pgtable.h    |    1 +
 arch/x86/kernel/head64.c          |    2 +-
 arch/x86/kernel/setup.c           |    2 ++
 arch/x86/mm/init.c                |   37 +++++++++++++++++++++++++++++++++++--
 arch/x86/mm/init_64.c             |    7 ++++++-
 6 files changed, 46 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
index 54c9787..1903736 100644
--- a/arch/x86/include/asm/page_types.h
+++ b/arch/x86/include/asm/page_types.h
@@ -45,6 +45,7 @@ extern int devmem_is_allowed(unsigned long pagenr);
 
 extern unsigned long max_low_pfn_mapped;
 extern unsigned long max_pfn_mapped;
+extern unsigned long max_pfn_high_mapped;
 
 static inline phys_addr_t get_max_mapped(void)
 {
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index ae2cabb..c67a684 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -598,6 +598,7 @@ static inline int pgd_none(pgd_t pgd)
 
 extern int direct_gbpages;
 void init_mem_mapping(void);
+void early_init_mem_mapping(void);
 
 /* local pte updates need not use xchg for locking */
 static inline pte_t native_local_ptep_get_and_clear(pte_t *ptep)
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 037df57..b11bde0 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -76,7 +76,7 @@ void __init x86_64_start_kernel(char * real_mode_data)
 	/* Make NULL pointers segfault */
 	zap_identity_mappings();
 
-	max_pfn_mapped = KERNEL_IMAGE_SIZE >> PAGE_SHIFT;
+	max_pfn_high_mapped = KERNEL_IMAGE_SIZE >> PAGE_SHIFT;
 
 	for (i = 0; i < NUM_EXCEPTION_VECTORS; i++) {
 #ifdef CONFIG_EARLY_PRINTK
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 2eb91b7..b942036 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -896,6 +896,8 @@ void __init setup_arch(char **cmdline_p)
 
 	reserve_ibft_region();
 
+	early_init_mem_mapping();
+
 	/*
 	 * Need to conclude brk, before memblock_x86_fill()
 	 *  it could use memblock_find_in_range, could overlap with
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 343d925..c96b731 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -377,8 +377,6 @@ void __init init_mem_mapping(void)
 {
 	unsigned long tables = 0, good_end, end;
 
-	probe_page_size_mask();
-
 	/*
 	 * Find space for the kernel direct mapping tables.
 	 *
@@ -437,6 +435,41 @@ void __init init_mem_mapping(void)
 	early_memtest(0, max_pfn_mapped << PAGE_SHIFT);
 }
 
+RESERVE_BRK(early_pgt_alloc, 65536);
+
+void  __init early_init_mem_mapping(void)
+{
+	unsigned long tables;
+	phys_addr_t base;
+	unsigned long start = 0, end = ISA_END_ADDRESS;
+
+	probe_page_size_mask();
+
+	if (max_pfn_mapped)
+		return;
+
+	tables = calculate_table_space_size(start, end);
+	base = __pa(extend_brk(tables, PAGE_SIZE));
+
+	pgt_buf_start = base >> PAGE_SHIFT;
+	pgt_buf_end = pgt_buf_start;
+	pgt_buf_top = pgt_buf_start + (tables >> PAGE_SHIFT);
+
+	printk(KERN_DEBUG "kernel direct mapping tables up to %#lx @ [mem %#010lx-%#010lx] prealloc\n",
+		end - 1, pgt_buf_start << PAGE_SHIFT,
+		(pgt_buf_top << PAGE_SHIFT) - 1);
+
+	init_memory_mapping(start, end);
+
+	printk(KERN_DEBUG "kernel direct mapping tables up to %#lx @ [mem %#010lx-%#010lx] final\n",
+		end - 1, pgt_buf_start << PAGE_SHIFT,
+		(pgt_buf_end << PAGE_SHIFT) - 1);
+	/* return not used brk */
+	_brk_end -= (pgt_buf_top - pgt_buf_end) << PAGE_SHIFT;
+
+	pgt_buf_top = 0;
+}
+
 /*
  * devmem_is_allowed() checks to see if /dev/mem access to a certain address
  * is valid. The argument is a physical page number.
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index ab558eb..832ac89 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -286,6 +286,7 @@ void __init init_extra_mapping_uc(unsigned long phys, unsigned long size)
 	__init_extra_mapping(phys, size, PAGE_KERNEL_LARGE_NOCACHE);
 }
 
+unsigned long max_pfn_high_mapped;
 /*
  * The head.S code sets up the kernel high mapping:
  *
@@ -302,7 +303,8 @@ void __init init_extra_mapping_uc(unsigned long phys, unsigned long size)
 void __init cleanup_highmap(void)
 {
 	unsigned long vaddr = __START_KERNEL_map;
-	unsigned long vaddr_end = __START_KERNEL_map + (max_pfn_mapped << PAGE_SHIFT);
+	unsigned long vaddr_end = __START_KERNEL_map +
+				 (max_pfn_high_mapped << PAGE_SHIFT);
 	unsigned long end = roundup((unsigned long)_brk_end, PMD_SIZE) - 1;
 	pmd_t *pmd = level2_kernel_pgt;
 
@@ -312,6 +314,9 @@ void __init cleanup_highmap(void)
 		if (vaddr < (unsigned long) _text || vaddr > end)
 			set_pmd(pmd, __pmd(0));
 	}
+	max_pfn_high_mapped = __pa(end) >> PAGE_SHIFT;
+
+	pr_info("max_pfn_high_mapped: %lx\n", max_pfn_high_mapped);
 }
 
 static __ref void *alloc_low_page(unsigned long *phys)
-- 
1.7.7


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 01/13] x86, mm: Add global page_size_mask
  2012-09-02  7:46 ` [PATCH -v2 01/13] x86, mm: Add global page_size_mask Yinghai Lu
@ 2012-09-03  5:23   ` Pekka Enberg
  0 siblings, 0 replies; 33+ messages in thread
From: Pekka Enberg @ 2012-09-03  5:23 UTC (permalink / raw)
  To: Yinghai Lu
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@kernel.org> wrote:
> detect if need to use 1G or 2M and store them in page_size_mask.
>
> Only probe them one time.
>
> Suggested-by: Ingo Molnar <mingo@elte.hu>
> Signed-off-by: Yinghai Lu <yinghai@kernel.org>

Reviewed-by: Pekka Enberg <penberg@kernel.org>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 02/13] x86, mm: Split out split_mem_range
  2012-09-02  7:46 ` [PATCH -v2 02/13] x86, mm: Split out split_mem_range Yinghai Lu
@ 2012-09-03  5:23   ` Pekka Enberg
  0 siblings, 0 replies; 33+ messages in thread
From: Pekka Enberg @ 2012-09-03  5:23 UTC (permalink / raw)
  To: Yinghai Lu
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@kernel.org> wrote:
> from init_memory_mapping, so make init_memory_mapping readable.
>
> Suggested-by: Ingo Molnar <mingo@elte.hu>
> Signed-off-by: Yinghai Lu <yinghai@kernel.org>

Reviewed-by: Pekka Enberg <penberg@kernel.org>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 03/13] x86, mm: Moving init_memory_mapping calling
  2012-09-02  7:46 ` [PATCH -v2 03/13] x86, mm: Moving init_memory_mapping calling Yinghai Lu
@ 2012-09-03  5:24   ` Pekka Enberg
  0 siblings, 0 replies; 33+ messages in thread
From: Pekka Enberg @ 2012-09-03  5:24 UTC (permalink / raw)
  To: Yinghai Lu
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@kernel.org> wrote:
> from setup.c to mm/init.c
>
> So could update all related calling together later
>
> Signed-off-by: Yinghai Lu <yinghai@kernel.org>

Reviewed-by: Pekka Enberg <penberg@kernel.org>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 04/13] x86, mm: Revert back good_end setting for 64bit
  2012-09-02  7:46 ` [PATCH -v2 04/13] x86, mm: Revert back good_end setting for 64bit Yinghai Lu
@ 2012-09-03  5:25   ` Pekka Enberg
  0 siblings, 0 replies; 33+ messages in thread
From: Pekka Enberg @ 2012-09-03  5:25 UTC (permalink / raw)
  To: Yinghai Lu
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@kernel.org> wrote:
> So we could put page table high again for 64bit.
>
> Signed-off-by: Yinghai Lu <yinghai@kernel.org>

The changelog for this is too terse for me to actually understand why
this is needed.

> ---
>  arch/x86/mm/init.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index 15a6a38..cca9b7d 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -76,8 +76,8 @@ static void __init find_early_table_space(struct map_range *mr,
>  #ifdef CONFIG_X86_32
>         /* for fixmap */
>         tables += roundup(__end_of_fixed_addresses * sizeof(pte_t), PAGE_SIZE);
> -#endif
>         good_end = max_pfn_mapped << PAGE_SHIFT;
> +#endif
>
>         base = memblock_find_in_range(start, good_end, tables, PAGE_SIZE);
>         if (!base)
> --
> 1.7.7
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 05/13] x86, mm: Find early page table only one time
  2012-09-02  7:46 ` [PATCH -v2 05/13] x86, mm: Find early page table only one time Yinghai Lu
@ 2012-09-03  5:27   ` Pekka Enberg
  0 siblings, 0 replies; 33+ messages in thread
From: Pekka Enberg @ 2012-09-03  5:27 UTC (permalink / raw)
  To: Yinghai Lu
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@kernel.org> wrote:
> Should not do that in every calling of init_memory_mapping.
> Actually in early time, only need do once.
>
> Also move down early_memtest.
>
> Signed-off-by: Yinghai Lu <yinghai@kernel.org>

The changelog is too terse for my liking. I think it could use some
more context on what the code is actually doing now and why the change
makes it better.

> ---
>  arch/x86/mm/init.c |   72 ++++++++++++++++++++++++++-------------------------
>  1 files changed, 37 insertions(+), 35 deletions(-)
>
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index cca9b7d..0ada295 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -37,7 +37,7 @@ struct map_range {
>
>  static int page_size_mask;
>
> -static void __init find_early_table_space(struct map_range *mr,
> +static void __init find_early_table_space(unsigned long begin,
>                                           unsigned long end)
>  {
>         unsigned long puds, pmds, ptes, tables, start = 0, good_end = end;
> @@ -64,8 +64,8 @@ static void __init find_early_table_space(struct map_range *mr,
>                 extra += PMD_SIZE;
>  #endif
>                 /* The first 2/4M doesn't use large pages. */
> -               if (mr->start < PMD_SIZE)
> -                       extra += mr->end - mr->start;
> +               if (begin < PMD_SIZE)
> +                       extra += (PMD_SIZE - begin) >> PAGE_SHIFT;
>
>                 ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
>         } else
> @@ -265,16 +265,6 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
>         nr_range = 0;
>         nr_range = split_mem_range(mr, nr_range, start, end);
>
> -       /*
> -        * Find space for the kernel direct mapping tables.
> -        *
> -        * Later we should allocate these tables in the local node of the
> -        * memory mapped. Unfortunately this is done currently before the
> -        * nodes are discovered.
> -        */
> -       if (!after_bootmem)
> -               find_early_table_space(&mr[0], end);
> -
>         for (i = 0; i < nr_range; i++)
>                 ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
>                                                    mr[i].page_size_mask);
> @@ -287,6 +277,36 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
>
>         __flush_tlb_all();
>
> +       return ret >> PAGE_SHIFT;
> +}
> +
> +void __init init_mem_mapping(void)
> +{
> +       probe_page_size_mask();
> +
> +       /*
> +        * Find space for the kernel direct mapping tables.
> +        *
> +        * Later we should allocate these tables in the local node of the
> +        * memory mapped. Unfortunately this is done currently before the
> +        * nodes are discovered.
> +        */
> +#ifdef CONFIG_X86_64
> +       find_early_table_space(0, max_pfn<<PAGE_SHIFT);
> +#else
> +       find_early_table_space(0, max_low_pfn<<PAGE_SHIFT);
> +#endif
> +       max_low_pfn_mapped = init_memory_mapping(0, max_low_pfn<<PAGE_SHIFT);
> +       max_pfn_mapped = max_low_pfn_mapped;
> +
> +#ifdef CONFIG_X86_64
> +       if (max_pfn > max_low_pfn) {
> +               max_pfn_mapped = init_memory_mapping(1UL<<32,
> +                                                    max_pfn<<PAGE_SHIFT);
> +               /* can we preseve max_low_pfn ?*/
> +               max_low_pfn = max_pfn;
> +       }
> +#endif
>         /*
>          * Reserve the kernel pagetable pages we used (pgt_buf_start -
>          * pgt_buf_end) and free the other ones (pgt_buf_end - pgt_buf_top)
> @@ -302,32 +322,14 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
>          * RO all the pagetable pages, including the ones that are beyond
>          * pgt_buf_end at that time.
>          */
> -       if (!after_bootmem && pgt_buf_end > pgt_buf_start)
> +       if (pgt_buf_end > pgt_buf_start)
>                 x86_init.mapping.pagetable_reserve(PFN_PHYS(pgt_buf_start),
>                                 PFN_PHYS(pgt_buf_end));
>
> -       if (!after_bootmem)
> -               early_memtest(start, end);
> +       /* stop the wrong using */
> +       pgt_buf_top = 0;
>
> -       return ret >> PAGE_SHIFT;
> -}
> -
> -void __init init_mem_mapping(void)
> -{
> -       probe_page_size_mask();
> -
> -       /* max_pfn_mapped is updated here */
> -       max_low_pfn_mapped = init_memory_mapping(0, max_low_pfn<<PAGE_SHIFT);
> -       max_pfn_mapped = max_low_pfn_mapped;
> -
> -#ifdef CONFIG_X86_64
> -       if (max_pfn > max_low_pfn) {
> -               max_pfn_mapped = init_memory_mapping(1UL<<32,
> -                                                    max_pfn<<PAGE_SHIFT);
> -               /* can we preseve max_low_pfn ?*/
> -               max_low_pfn = max_pfn;
> -       }
> -#endif
> +       early_memtest(0, max_pfn_mapped << PAGE_SHIFT);
>  }
>
>  /*
> --
> 1.7.7
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 06/13] x86, mm: Separate out calculate_table_space_size()
  2012-09-02  7:46 ` [PATCH -v2 06/13] x86, mm: Separate out calculate_table_space_size() Yinghai Lu
@ 2012-09-03  5:28   ` Pekka Enberg
  0 siblings, 0 replies; 33+ messages in thread
From: Pekka Enberg @ 2012-09-03  5:28 UTC (permalink / raw)
  To: Yinghai Lu
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@kernel.org> wrote:
> it should take physical address range that will need to be mapped.
> and find_early_table_space should take range that pgt buff should be in.
> Separate those two to reduce confusion.
>
> Signed-off-by: Yinghai Lu <yinghai@kernel.org>

Reviewed-by: Pekka Enberg <penberg@kernel.org>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 07/13] x86, mm: Move down two calculate_table_space_size down.
  2012-09-02  7:46 ` [PATCH -v2 07/13] x86, mm: Move down two calculate_table_space_size down Yinghai Lu
@ 2012-09-03  5:29   ` Pekka Enberg
  0 siblings, 0 replies; 33+ messages in thread
From: Pekka Enberg @ 2012-09-03  5:29 UTC (permalink / raw)
  To: Yinghai Lu
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@kernel.org> wrote:
> So later could make it call split_mem_range...
>
> Signed-off-by: Yinghai Lu <yinghai@kernel.org>

The commit title is utterly confusing. And it has a trailing dot (".").

As for the actual change:

Reviewed-by: Pekka Enberg <penberg@kernel.org>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 08/13] x86: if kernel .text .data .bss are not marked as E820_RAM, complain and fix
  2012-09-02  7:46 ` [PATCH -v2 08/13] x86: if kernel .text .data .bss are not marked as E820_RAM, complain and fix Yinghai Lu
@ 2012-09-03  5:31   ` Pekka Enberg
  0 siblings, 0 replies; 33+ messages in thread
From: Pekka Enberg @ 2012-09-03  5:31 UTC (permalink / raw)
  To: Yinghai Lu
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@kernel.org> wrote:
> From: Jacob Shin <jacob.shin@amd.com>
>
> There could be cases where user supplied memmap=exactmap memory
> mappings do not mark the region where the kernel .text .data and
> .bss reside as E820_RAM, as reported here:
>
> https://lkml.org/lkml/2012/8/14/86
>
> Handle it by complaining, and adding the range back into the e820.
>
> Signed-off-by: Jacob Shin <jacob.shin@amd.com>

This should have Yinghai's sign-off and the warning could be less cryptic.

As for the fix itself:

Reviewed-by: Pekka Enberg <penberg@kernel.org>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 09/13] x86: Fixup code testing if a pfn is direct mapped
  2012-09-02  7:46 ` [PATCH -v2 09/13] x86: Fixup code testing if a pfn is direct mapped Yinghai Lu
@ 2012-09-03  5:32   ` Pekka Enberg
  0 siblings, 0 replies; 33+ messages in thread
From: Pekka Enberg @ 2012-09-03  5:32 UTC (permalink / raw)
  To: Yinghai Lu
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@kernel.org> wrote:
> From: Jacob Shin <jacob.shin@amd.com>
>
> Update code that previously assumed pfns [ 0 - max_low_pfn_mapped ) and
> [ 4GB - max_pfn_mapped ) were always direct mapped, to now look up
> pfn_mapped ranges instead.

What problem does this fix? How did you find out about it?

> -v2: change applying sequence to keep git bisecting working.
>      so add dummy pfn_range_is_mapped(). - Yinghai Lu
>
> Signed-off-by: Jacob Shin <jacob.shin@amd.com>

Yinghai's sign-off is missing.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 10/13] x86: Only direct map addresses that are marked as E820_RAM
  2012-09-02  7:46 ` [PATCH -v2 10/13] x86: Only direct map addresses that are marked as E820_RAM Yinghai Lu
@ 2012-09-03  5:34   ` Pekka Enberg
  0 siblings, 0 replies; 33+ messages in thread
From: Pekka Enberg @ 2012-09-03  5:34 UTC (permalink / raw)
  To: Yinghai Lu
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@kernel.org> wrote:
> From: Jacob Shin <jacob.shin@amd.com>
>
> Currently direct mappings are created for [ 0 to max_low_pfn<<PAGE_SHIFT )
> and [ 4GB to max_pfn<<PAGE_SHIFT ), which may include regions that are not
> backed by actual DRAM. This is fine for holes under 4GB which are covered
> by fixed and variable range MTRRs to be UC. However, we run into trouble
> on higher memory addresses which cannot be covered by MTRRs.
>
> Our system with 1TB of RAM has an e820 that looks like this:
>
>  BIOS-e820: [mem 0x0000000000000000-0x00000000000983ff] usable
>  BIOS-e820: [mem 0x0000000000098400-0x000000000009ffff] reserved
>  BIOS-e820: [mem 0x00000000000d0000-0x00000000000fffff] reserved
>  BIOS-e820: [mem 0x0000000000100000-0x00000000c7ebffff] usable
>  BIOS-e820: [mem 0x00000000c7ec0000-0x00000000c7ed7fff] ACPI data
>  BIOS-e820: [mem 0x00000000c7ed8000-0x00000000c7ed9fff] ACPI NVS
>  BIOS-e820: [mem 0x00000000c7eda000-0x00000000c7ffffff] reserved
>  BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
>  BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
>  BIOS-e820: [mem 0x00000000fff00000-0x00000000ffffffff] reserved
>  BIOS-e820: [mem 0x0000000100000000-0x000000e037ffffff] usable
>  BIOS-e820: [mem 0x000000e038000000-0x000000fcffffffff] reserved
>  BIOS-e820: [mem 0x0000010000000000-0x0000011ffeffffff] usable
>
> and so direct mappings are created for huge memory hole between
> 0x000000e038000000 to 0x0000010000000000. Even though the kernel never
> generates memory accesses in that region, since the page tables mark
> them incorrectly as being WB, our (AMD) processor ends up causing a MCE
> while doing some memory bookkeeping/optimizations around that area.
>
> This patch iterates through e820 and only direct maps ranges that are
> marked as E820_RAM, and keeps track of those pfn ranges. Depending on
> the alignment of E820 ranges, this may possibly result in using smaller
> size (i.e. 4K instead of 2M or 1G) page tables.
>
> -v2: move changes from setup.c to mm/init.c, also use for_each_mem_pfn_range
>         instead.  - Yinghai Lu
> -v3: add calculate_all_table_space_size() to get correct needed page table
>         size. - Yinghai Lu
>
> Signed-off-by: Jacob Shin <jacob.shin@amd.com>

Yinghai's sign-off is missing.

Reviewed-by: Pekka Enberg <penberg@kernel.org>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 11/13] x86/mm: calculate_table_space_size based on memory ranges that are being mapped
  2012-09-02  7:46 ` [PATCH -v2 11/13] x86/mm: calculate_table_space_size based on memory ranges that are being mapped Yinghai Lu
@ 2012-09-03  5:36   ` Pekka Enberg
  2012-09-03  6:21     ` Yinghai Lu
  0 siblings, 1 reply; 33+ messages in thread
From: Pekka Enberg @ 2012-09-03  5:36 UTC (permalink / raw)
  To: Yinghai Lu
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@kernel.org> wrote:
> From: Jacob Shin <jacob.shin@amd.com>
>
> Current logic finds enough space for direct mapping page tables from 0
> to end. Instead, we only need to find enough space to cover mr[0].start
> to mr[nr_range].end -- the range that is actually being mapped by
> init_memory_mapping()
>
> This patch also reportedly fixes suspend/resume issue reported in:
>
> https://lkml.org/lkml/2012/8/11/83
>
> -v2: update with calculate_table_space_size()
>      clear max_pfn_mapped before init_all_memory_mapping to get right value
>                                           -Yinghai Lu
>
> Signed-off-by: Jacob Shin <jacob.shin@amd.com>

Yinghai's sign-off is missing.

Reviewed-by: Pekka Enberg <penberg@kernel.org>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 12/13] x86, mm: Use func pointer to table size calculation and mapping.
  2012-09-02  7:46 ` [PATCH -v2 12/13] x86, mm: Use func pointer to table size calculation and mapping Yinghai Lu
@ 2012-09-03  5:43   ` Pekka Enberg
  2012-09-03  6:21     ` Yinghai Lu
  0 siblings, 1 reply; 33+ messages in thread
From: Pekka Enberg @ 2012-09-03  5:43 UTC (permalink / raw)
  To: Yinghai Lu
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@kernel.org> wrote:
> +static void __init with_all_ram_ranges(
> +                       void (*work_fn)(unsigned long, unsigned long, void *),
> +                       void *data)

> +static void __init size_work_fn(unsigned long start, unsigned long end, void *data)

> +static void __init mapping_work_fn(unsigned long start, unsigned long end,
> +                                        void *data)

So I passionately hate the naming convention. How about something
similar to mm/pagewalk.c:

  s/with_all_ram_ranges/walk_ram_ranges/g

  s/size_work_fn/table_space_size/g

  s/mapping_work_fn/map_memory/g
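
For reference, a minimal sketch of the callback-walker shape this is
about, using the suggested names (the ranges and the per-range work are
made up for the example; this is not the patch code):

#include <stdio.h>

struct range { unsigned long start_pfn, end_pfn; };

/* stand-in for the E820_RAM pfn ranges the real walker iterates over */
static const struct range ram_ranges[] = {
	{ 0x000, 0x09f }, { 0x100, 0x7ff },
};

typedef void (*range_work_fn)(unsigned long start, unsigned long end, void *data);

static void walk_ram_ranges(range_work_fn work_fn, void *data)
{
	unsigned long i;

	for (i = 0; i < sizeof(ram_ranges) / sizeof(ram_ranges[0]); i++)
		work_fn(ram_ranges[i].start_pfn, ram_ranges[i].end_pfn, data);
}

/* accumulate something per range, the way the size calculation callback does */
static void table_space_size(unsigned long start, unsigned long end, void *data)
{
	*(unsigned long *)data += end - start;
}

int main(void)
{
	unsigned long total = 0;

	walk_ram_ranges(table_space_size, &total);
	printf("total pfns walked: %lu\n", total);
	return 0;
}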

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 13/13] x86, 64bit: Map first 1M ram early before memblock_x86_fill()
  2012-09-02  7:46 ` [PATCH -v2 13/13] x86, 64bit: Map first 1M ram early before memblock_x86_fill() Yinghai Lu
@ 2012-09-03  5:50   ` Pekka Enberg
  2012-09-03  6:17     ` Yinghai Lu
  0 siblings, 1 reply; 33+ messages in thread
From: Pekka Enberg @ 2012-09-03  5:50 UTC (permalink / raw)
  To: Yinghai Lu
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@kernel.org> wrote:
> This patch intends to fix a bug:
> when booting via EFI with many memmap entries, memblock will need to
> double its memory array or reserved array.

Okay, why do we need to do that?

> +RESERVE_BRK(early_pgt_alloc, 65536);

What is this needed for?

> +void  __init early_init_mem_mapping(void)
> +{
> +       unsigned long tables;
> +       phys_addr_t base;
> +       unsigned long start = 0, end = ISA_END_ADDRESS;
> +
> +       probe_page_size_mask();
> +
> +       if (max_pfn_mapped)
> +               return;

I find this confusing - what is this protecting against? Why is
'max_pfn_mapped' set when someone calls early_init_mem_mapping()?

Side note: we have multiple "pfn_mapped" globals and it's not at all
obvious to me what the semantics for them are. Maybe adding a comment
or two in arch/x86/include/asm/page_types.h would help.

> +
> +       tables = calculate_table_space_size(start, end);
> +       base = __pa(extend_brk(tables, PAGE_SIZE));
> +

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 13/13] x86, 64bit: Map first 1M ram early before memblock_x86_fill()
  2012-09-03  5:50   ` Pekka Enberg
@ 2012-09-03  6:17     ` Yinghai Lu
  2012-09-03  6:26       ` Pekka Enberg
  0 siblings, 1 reply; 33+ messages in thread
From: Yinghai Lu @ 2012-09-03  6:17 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Sun, Sep 2, 2012 at 10:50 PM, Pekka Enberg <penberg@kernel.org> wrote:
> On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@kernel.org> wrote:
>> This patch intends to fix a bug:
>> when booting via EFI with many memmap entries, memblock will need to
>> double its memory array or reserved array.
>
> Okay, why do we need to do that?

memblock's initial memory array only has 128 entries, and some EFI
systems can have more entries than that.

So during memblock_x86_fill we need to double that array.

And efi_reserve_boot_services() can make things even worse, i.e. it
needs more entries in memblock.memory.regions.
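
The shape of the problem, as a simplified userspace analogy (this is not
the memblock code; malloc stands in for finding and mapping a new
buffer, which in the kernel is exactly the step that needs __va() and
therefore the low mapping):

#include <stdlib.h>
#include <string.h>

#define INIT_MEMBLOCK_REGIONS	128

struct region { unsigned long base, size; };

static struct region init_regions[INIT_MEMBLOCK_REGIONS];
static struct region *regions = init_regions;
static unsigned long nr_regions, max_regions = INIT_MEMBLOCK_REGIONS;

static int add_region(unsigned long base, unsigned long size)
{
	if (nr_regions == max_regions) {
		/* the 129th entry forces a doubling of the array */
		struct region *new = malloc(2 * max_regions * sizeof(*new));

		if (!new)
			return -1;
		memcpy(new, regions, max_regions * sizeof(*regions));
		if (regions != init_regions)
			free(regions);
		regions = new;
		max_regions *= 2;
	}
	regions[nr_regions].base = base;
	regions[nr_regions].size = size;
	nr_regions++;
	return 0;
}

int main(void)
{
	unsigned long i;

	/* an EFI memmap with more than 128 entries triggers the doubling */
	for (i = 0; i < 200; i++)
		add_region(i << 20, 1UL << 20);
	return 0;
}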

>
>> +RESERVE_BRK(early_pgt_alloc, 65536);
>
> What is this needed for?

It reserves brk space for the extra page tables; extend_brk() will consume it.
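
In other words it is a simple bump allocator over space set aside in the
kernel image. A rough userspace sketch of the idea (BRK_SIZE matches the
RESERVE_BRK size above; this is not the real _brk_end/extend_brk code,
and alignment is applied only to the offset for simplicity):

#include <stdio.h>
#include <assert.h>

#define BRK_SIZE	65536		/* what RESERVE_BRK(early_pgt_alloc, 65536) sets aside */

static unsigned char brk_area[BRK_SIZE];	/* stands in for the .brk section */
static unsigned long brk_end;			/* stands in for _brk_end */

/* hand out 'size' bytes from the reserved brk area, 'align'-aligned */
static void *extend_brk_sketch(unsigned long size, unsigned long align)
{
	void *p;

	brk_end = (brk_end + align - 1) & ~(align - 1);
	assert(brk_end + size <= BRK_SIZE);	/* the real code BUG()s here */

	p = &brk_area[brk_end];
	brk_end += size;
	return p;
}

int main(void)
{
	/* e.g. grab one page-aligned page-table page at a time */
	void *pgt = extend_brk_sketch(4096, 4096);

	printf("got a page table page at %p\n", pgt);
	return 0;
}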

>
>> +void  __init early_init_mem_mapping(void)
>> +{
>> +       unsigned long tables;
>> +       phys_addr_t base;
>> +       unsigned long start = 0, end = ISA_END_ADDRESS;
>> +
>> +       probe_page_size_mask();
>> +
>> +       if (max_pfn_mapped)
>> +               return;
>
> I find this confusing - what is this protecting against? Why is
> 'max_pfn_mapped' set when someone calls early_init_mem_mapping()?

For 32 bit, max_pfn_mapped is already set to a non-zero value in head_32.S.

>
> Side note: we have multiple "pfn_mapped" globals and it's not at all
> obvious to me what the semantics for them are. Maybe adding a comment
> or two in arch/x86/include/asm/page_types.h would help.

Move the comments from arch/x86/kernel/setup.c to that header file?

Thanks

Yinghai

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 12/13] x86, mm: Use func pointer to table size calculation and mapping.
  2012-09-03  5:43   ` Pekka Enberg
@ 2012-09-03  6:21     ` Yinghai Lu
  0 siblings, 0 replies; 33+ messages in thread
From: Yinghai Lu @ 2012-09-03  6:21 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Sun, Sep 2, 2012 at 10:43 PM, Pekka Enberg <penberg@kernel.org> wrote:
> On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@kernel.org> wrote:
>> +static void __init with_all_ram_ranges(
>> +                       void (*work_fn)(unsigned long, unsigned long, void *),
>> +                       void *data)
>
>> +static void __init size_work_fn(unsigned long start, unsigned long end, void *data)
>
>> +static void __init mapping_work_fn(unsigned long start, unsigned long end,
>> +                                        void *data)
>
> So I passionately hate the naming convention. How about something
> similar to mm/pagewalk.c:
>
>   s/with_all_ram_ranges/walk_ram_ranges/g

ok.

>
>   s/size_work_fn/table_space_size/g
>
>   s/mapping_work_fn/map_memory/g

I would prefer the simpler names size_work_fn and mapping_work_fn.

Thanks

Yinghai

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 11/13] x86/mm: calculate_table_space_size based on memory ranges that are being mapped
  2012-09-03  5:36   ` Pekka Enberg
@ 2012-09-03  6:21     ` Yinghai Lu
  0 siblings, 0 replies; 33+ messages in thread
From: Yinghai Lu @ 2012-09-03  6:21 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Sun, Sep 2, 2012 at 10:36 PM, Pekka Enberg <penberg@kernel.org> wrote:
>
> Yinghai's sign-off is missing.

Will add that in the next version if needed.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 13/13] x86, 64bit: Map first 1M ram early before memblock_x86_fill()
  2012-09-03  6:17     ` Yinghai Lu
@ 2012-09-03  6:26       ` Pekka Enberg
  2012-09-03  7:18         ` Yinghai Lu
  0 siblings, 1 reply; 33+ messages in thread
From: Pekka Enberg @ 2012-09-03  6:26 UTC (permalink / raw)
  To: Yinghai Lu
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Mon, Sep 3, 2012 at 9:17 AM, Yinghai Lu <yinghai@kernel.org> wrote:
> On Sun, Sep 2, 2012 at 10:50 PM, Pekka Enberg <penberg@kernel.org> wrote:
>> On Sun, Sep 2, 2012 at 10:46 AM, Yinghai Lu <yinghai@kernel.org> wrote:
>>> This patch intends to fix a bug:
>>> when booting via EFI with many memmap entries, memblock will need to
>>> double its memory array or reserved array.
>>
>> Okay, why do we need to do that?
>
> memblock's initial memory array only has 128 entries, and some EFI
> systems can have more entries than that.
>
> So during memblock_x86_fill we need to double that array.
>
> And efi_reserve_boot_services() can make things even worse, i.e. it
> needs more entries in memblock.memory.regions.

Aah. Care to put that information in the changelog?

>>> +void  __init early_init_mem_mapping(void)
>>> +{
>>> +       unsigned long tables;
>>> +       phys_addr_t base;
>>> +       unsigned long start = 0, end = ISA_END_ADDRESS;
>>> +
>>> +       probe_page_size_mask();
>>> +
>>> +       if (max_pfn_mapped)
>>> +               return;
>>
>> I find this confusing - what is this protecting against? Why is
>> 'max_pfn_mapped' set when someone calls early_init_mem_mapping()?
>
> For 32 bit, max_pfn_mapped is already set to a non-zero value in head_32.S.

OK, that's why my grep missed it. A comment would be nice.

>> Side note: we have multiple "pfn_mapped" globals and it's not at all
>> obvious to me what the semantics for them are. Maybe adding a comment
>> or two in arch/x86/include/asm/page_types.h would help.
>
> Move the comments from arch/x86/kernel/setup.c to that header file?

Yup, or move the globals together with the comment to arch/x86/mm/init.c.

That said, max_pfn_high_mapped really ought to be kept together with
the other "pfn_mapped" globals and the comment should be updated.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 13/13] x86, 64bit: Map first 1M ram early before memblock_x86_fill()
  2012-09-03  6:26       ` Pekka Enberg
@ 2012-09-03  7:18         ` Yinghai Lu
  2012-09-04  2:48           ` Yinghai Lu
  0 siblings, 1 reply; 33+ messages in thread
From: Yinghai Lu @ 2012-09-03  7:18 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 525 bytes --]

On Sun, Sep 2, 2012 at 11:26 PM, Pekka Enberg <penberg@kernel.org> wrote:
>
> Yup, or move the globals together with the comment to arch/x86/mm/init.c.
>
> That said, max_pfn_high_mapped really ought to be kept together with
> the other "pfn_mapped" globals and the comment should be updated.

max_pfn_high_mapped is only for 64bit, and it is in init_64.c

Maybe a later patch could move max_pfn_mapped and max_low_pfn_mapped
from kernel/setup.c to mm/init.c.

Please check the attached updated patch.

Thanks

Yinghai

[-- Attachment #2: fix_max_pfn_mapped_xx.patch --]
[-- Type: application/octet-stream, Size: 6286 bytes --]

Subject: [PATCH] x86, 64bit: Map first 1M ram early before memblock_x86_fill()

This patch intends to fix a bug:
memblock's initial memory array only has 128 entries, and some EFI
systems can have more entries than that.
So during memblock_x86_fill we need to double that array.
And efi_reserve_boot_services() can make things even worse, i.e. it
needs more entries in memblock.memory.regions.

For 64bit we have a low kernel mapping and a high kernel mapping.
The high kernel mapping is done early in head_64.S.
The low kernel mapping is done in init_memory_mapping.

max_pfn_mapped currently reflects the high mapping, so when we later need
a buffer for doubling the memblock array, we can find space for the new
array, but we cannot use it,
as __va() in double_array() returns a virtual address in the low kernel mapping.

The patch maps the first 1M range early and finds the page table space
for it in the BRK.

Also add max_pfn_high_mapped to track the high mapped range, so we can keep
the initial max_pfn_mapped at 0 for 64bit.

-v2: Update changelog and comments according to Pekka Enberg.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>

---
 arch/x86/include/asm/page_types.h |    1 
 arch/x86/include/asm/pgtable.h    |    1 
 arch/x86/kernel/head64.c          |    2 -
 arch/x86/kernel/setup.c           |    2 +
 arch/x86/mm/init.c                |   41 ++++++++++++++++++++++++++++++++++++--
 arch/x86/mm/init_64.c             |    9 +++++++-
 6 files changed, 52 insertions(+), 4 deletions(-)

Index: linux-2.6/arch/x86/include/asm/page_types.h
===================================================================
--- linux-2.6.orig/arch/x86/include/asm/page_types.h
+++ linux-2.6/arch/x86/include/asm/page_types.h
@@ -45,6 +45,7 @@ extern int devmem_is_allowed(unsigned lo
 
 extern unsigned long max_low_pfn_mapped;
 extern unsigned long max_pfn_mapped;
+extern unsigned long max_pfn_high_mapped;
 
 static inline phys_addr_t get_max_mapped(void)
 {
Index: linux-2.6/arch/x86/include/asm/pgtable.h
===================================================================
--- linux-2.6.orig/arch/x86/include/asm/pgtable.h
+++ linux-2.6/arch/x86/include/asm/pgtable.h
@@ -598,6 +598,7 @@ static inline int pgd_none(pgd_t pgd)
 
 extern int direct_gbpages;
 void init_mem_mapping(void);
+void early_init_mem_mapping(void);
 
 /* local pte updates need not use xchg for locking */
 static inline pte_t native_local_ptep_get_and_clear(pte_t *ptep)
Index: linux-2.6/arch/x86/kernel/head64.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/head64.c
+++ linux-2.6/arch/x86/kernel/head64.c
@@ -76,7 +76,7 @@ void __init x86_64_start_kernel(char * r
 	/* Make NULL pointers segfault */
 	zap_identity_mappings();
 
-	max_pfn_mapped = KERNEL_IMAGE_SIZE >> PAGE_SHIFT;
+	max_pfn_high_mapped = KERNEL_IMAGE_SIZE >> PAGE_SHIFT;
 
 	for (i = 0; i < NUM_EXCEPTION_VECTORS; i++) {
 #ifdef CONFIG_EARLY_PRINTK
Index: linux-2.6/arch/x86/kernel/setup.c
===================================================================
--- linux-2.6.orig/arch/x86/kernel/setup.c
+++ linux-2.6/arch/x86/kernel/setup.c
@@ -896,6 +896,8 @@ void __init setup_arch(char **cmdline_p)
 
 	reserve_ibft_region();
 
+	early_init_mem_mapping();
+
 	/*
 	 * Need to conclude brk, before memblock_x86_fill()
 	 *  it could use memblock_find_in_range, could overlap with
Index: linux-2.6/arch/x86/mm/init.c
===================================================================
--- linux-2.6.orig/arch/x86/mm/init.c
+++ linux-2.6/arch/x86/mm/init.c
@@ -377,8 +377,6 @@ void __init init_mem_mapping(void)
 {
 	unsigned long tables = 0, good_end, end;
 
-	probe_page_size_mask();
-
 	/*
 	 * Find space for the kernel direct mapping tables.
 	 *
@@ -437,6 +435,45 @@ void __init init_mem_mapping(void)
 	early_memtest(0, max_pfn_mapped << PAGE_SHIFT);
 }
 
+RESERVE_BRK(early_pgt_alloc, 65536);
+
+void  __init early_init_mem_mapping(void)
+{
+	unsigned long tables;
+	phys_addr_t base;
+	unsigned long start = 0, end = ISA_END_ADDRESS;
+
+	probe_page_size_mask();
+
+	/*
+	 * On 64bit, max_pfn_mapped should still be 0 at this point.
+	 * On 32bit, head_32.S has already set it to a non-zero value.
+	 */
+	if (max_pfn_mapped)
+		return;
+
+	tables = calculate_table_space_size(start, end);
+	base = __pa(extend_brk(tables, PAGE_SIZE));
+
+	pgt_buf_start = base >> PAGE_SHIFT;
+	pgt_buf_end = pgt_buf_start;
+	pgt_buf_top = pgt_buf_start + (tables >> PAGE_SHIFT);
+
+	printk(KERN_DEBUG "kernel direct mapping tables up to %#lx @ [mem %#010lx-%#010lx] prealloc\n",
+		end - 1, pgt_buf_start << PAGE_SHIFT,
+		(pgt_buf_top << PAGE_SHIFT) - 1);
+
+	init_memory_mapping(start, end);
+
+	printk(KERN_DEBUG "kernel direct mapping tables up to %#lx @ [mem %#010lx-%#010lx] final\n",
+		end - 1, pgt_buf_start << PAGE_SHIFT,
+		(pgt_buf_end << PAGE_SHIFT) - 1);
+	/* return not used brk */
+	_brk_end -= (pgt_buf_top - pgt_buf_end) << PAGE_SHIFT;
+
+	pgt_buf_top = 0;
+}
+
 /*
  * devmem_is_allowed() checks to see if /dev/mem access to a certain address
  * is valid. The argument is a physical page number.
Index: linux-2.6/arch/x86/mm/init_64.c
===================================================================
--- linux-2.6.orig/arch/x86/mm/init_64.c
+++ linux-2.6/arch/x86/mm/init_64.c
@@ -286,6 +286,9 @@ void __init init_extra_mapping_uc(unsign
 	__init_extra_mapping(phys, size, PAGE_KERNEL_LARGE_NOCACHE);
 }
 
+/* max_pfn_high_mapped: highest mapped pfn with high kernel mapping */
+unsigned long max_pfn_high_mapped;
+
 /*
  * The head.S code sets up the kernel high mapping:
  *
@@ -302,7 +305,8 @@ void __init init_extra_mapping_uc(unsign
 void __init cleanup_highmap(void)
 {
 	unsigned long vaddr = __START_KERNEL_map;
-	unsigned long vaddr_end = __START_KERNEL_map + (max_pfn_mapped << PAGE_SHIFT);
+	unsigned long vaddr_end = __START_KERNEL_map +
+				 (max_pfn_high_mapped << PAGE_SHIFT);
 	unsigned long end = roundup((unsigned long)_brk_end, PMD_SIZE) - 1;
 	pmd_t *pmd = level2_kernel_pgt;
 
@@ -312,6 +316,9 @@ void __init cleanup_highmap(void)
 		if (vaddr < (unsigned long) _text || vaddr > end)
 			set_pmd(pmd, __pmd(0));
 	}
+	max_pfn_high_mapped = __pa(end) >> PAGE_SHIFT;
+
+	pr_info("max_pfn_high_mapped: %lx\n", max_pfn_high_mapped);
 }
 
 static __ref void *alloc_low_page(unsigned long *phys)

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH -v2 13/13] x86, 64bit: Map first 1M ram early before memblock_x86_fill()
  2012-09-03  7:18         ` Yinghai Lu
@ 2012-09-04  2:48           ` Yinghai Lu
  0 siblings, 0 replies; 33+ messages in thread
From: Yinghai Lu @ 2012-09-04  2:48 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Jacob Shin,
	Tejun Heo, linux-kernel

On Mon, Sep 3, 2012 at 12:18 AM, Yinghai Lu <yinghai@kernel.org> wrote:
> On Sun, Sep 2, 2012 at 11:26 PM, Pekka Enberg <penberg@kernel.org> wrote:
>>
>> Yup, or move the globals together with the comment to arch/x86/mm/init.c.
>>
>> That said, max_pfn_high_mapped really ought to be kept together with
>> the other "pfn_mapped" globals and the comment should be updated.
>
> max_pfn_high_mapped is only for 64bit, and it is in init_64.c
>
> Maybe a later patch could move max_pfn_mapped and max_low_pfn_mapped
> from kernel/setup.c to mm/init.c.
>
> Please check the attached updated patch.

Oh, I'm wrong.

This patch is not needed until the day we no longer need to do the low
kernel mapping in head_64.S.

Thanks

Yinghai

^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2012-09-04  2:48 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-09-02  7:46 [PATCH -v2 00/13] x86, mm: init_memory_mapping cleanup Yinghai Lu
2012-09-02  7:46 ` [PATCH -v2 01/13] x86, mm: Add global page_size_mask Yinghai Lu
2012-09-03  5:23   ` Pekka Enberg
2012-09-02  7:46 ` [PATCH -v2 02/13] x86, mm: Split out split_mem_range Yinghai Lu
2012-09-03  5:23   ` Pekka Enberg
2012-09-02  7:46 ` [PATCH -v2 03/13] x86, mm: Moving init_memory_mapping calling Yinghai Lu
2012-09-03  5:24   ` Pekka Enberg
2012-09-02  7:46 ` [PATCH -v2 04/13] x86, mm: Revert back good_end setting for 64bit Yinghai Lu
2012-09-03  5:25   ` Pekka Enberg
2012-09-02  7:46 ` [PATCH -v2 05/13] x86, mm: Find early page table only one time Yinghai Lu
2012-09-03  5:27   ` Pekka Enberg
2012-09-02  7:46 ` [PATCH -v2 06/13] x86, mm: Separate out calculate_table_space_size() Yinghai Lu
2012-09-03  5:28   ` Pekka Enberg
2012-09-02  7:46 ` [PATCH -v2 07/13] x86, mm: Move down two calculate_table_space_size down Yinghai Lu
2012-09-03  5:29   ` Pekka Enberg
2012-09-02  7:46 ` [PATCH -v2 08/13] x86: if kernel .text .data .bss are not marked as E820_RAM, complain and fix Yinghai Lu
2012-09-03  5:31   ` Pekka Enberg
2012-09-02  7:46 ` [PATCH -v2 09/13] x86: Fixup code testing if a pfn is direct mapped Yinghai Lu
2012-09-03  5:32   ` Pekka Enberg
2012-09-02  7:46 ` [PATCH -v2 10/13] x86: Only direct map addresses that are marked as E820_RAM Yinghai Lu
2012-09-03  5:34   ` Pekka Enberg
2012-09-02  7:46 ` [PATCH -v2 11/13] x86/mm: calculate_table_space_size based on memory ranges that are being mapped Yinghai Lu
2012-09-03  5:36   ` Pekka Enberg
2012-09-03  6:21     ` Yinghai Lu
2012-09-02  7:46 ` [PATCH -v2 12/13] x86, mm: Use func pointer to table size calculation and mapping Yinghai Lu
2012-09-03  5:43   ` Pekka Enberg
2012-09-03  6:21     ` Yinghai Lu
2012-09-02  7:46 ` [PATCH -v2 13/13] x86, 64bit: Map first 1M ram early before memblock_x86_fill() Yinghai Lu
2012-09-03  5:50   ` Pekka Enberg
2012-09-03  6:17     ` Yinghai Lu
2012-09-03  6:26       ` Pekka Enberg
2012-09-03  7:18         ` Yinghai Lu
2012-09-04  2:48           ` Yinghai Lu
