From: tip-bot for Jacob Shin <jacob.shin@amd.com>
To: linux-tip-commits@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, hpa@zytor.com, mingo@kernel.org,
	yinghai@kernel.org, penberg@kernel.org, jacob.shin@amd.com,
	tglx@linutronix.de, hpa@linux.intel.com
Subject: [tip:x86/mm2] x86, mm: Only direct map addresses that are marked as E820_RAM
Date: Wed, 21 Nov 2012 17:53:01 -0800	[thread overview]
Message-ID: <tip-66520ebc2df3fe52eb4792f8101fac573b766baf@git.kernel.org> (raw)
In-Reply-To: <1353123563-3103-16-git-send-email-yinghai@kernel.org>

Commit-ID:  66520ebc2df3fe52eb4792f8101fac573b766baf
Gitweb:     http://git.kernel.org/tip/66520ebc2df3fe52eb4792f8101fac573b766baf
Author:     Jacob Shin <jacob.shin@amd.com>
AuthorDate: Fri, 16 Nov 2012 19:38:52 -0800
Committer:  H. Peter Anvin <hpa@linux.intel.com>
CommitDate: Sat, 17 Nov 2012 11:59:14 -0800

x86, mm: Only direct map addresses that are marked as E820_RAM

Currently direct mappings are created for [ 0 to max_low_pfn<<PAGE_SHIFT )
and [ 4GB to max_pfn<<PAGE_SHIFT ), which may include regions that are not
backed by actual DRAM. This is fine for holes under 4GB, which the fixed and
variable range MTRRs cover as UC. However, we run into trouble with higher
memory addresses, which cannot be covered by MTRRs.

Our system with 1TB of RAM has an e820 that looks like this:

 BIOS-e820: [mem 0x0000000000000000-0x00000000000983ff] usable
 BIOS-e820: [mem 0x0000000000098400-0x000000000009ffff] reserved
 BIOS-e820: [mem 0x00000000000d0000-0x00000000000fffff] reserved
 BIOS-e820: [mem 0x0000000000100000-0x00000000c7ebffff] usable
 BIOS-e820: [mem 0x00000000c7ec0000-0x00000000c7ed7fff] ACPI data
 BIOS-e820: [mem 0x00000000c7ed8000-0x00000000c7ed9fff] ACPI NVS
 BIOS-e820: [mem 0x00000000c7eda000-0x00000000c7ffffff] reserved
 BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
 BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
 BIOS-e820: [mem 0x00000000fff00000-0x00000000ffffffff] reserved
 BIOS-e820: [mem 0x0000000100000000-0x000000e037ffffff] usable
 BIOS-e820: [mem 0x000000e038000000-0x000000fcffffffff] reserved
 BIOS-e820: [mem 0x0000010000000000-0x0000011ffeffffff] usable

and so direct mappings are created for the huge memory hole between
0x000000e038000000 and 0x0000010000000000. Even though the kernel never
generates memory accesses in that region, since the page tables incorrectly
mark it as WB, our (AMD) processor ends up raising an MCE while doing some
memory bookkeeping/optimizations around that area.
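
For scale, the hole that would have been direct mapped here spans:

 0x0000010000000000 - 0x000000e038000000 = 0x1fc8000000 bytes (~127 GB)

none of it backed by DRAM, yet all of it marked WB by the old code.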

This patch iterates through the e820 map and direct maps only the ranges
marked as E820_RAM, keeping track of those pfn ranges. Depending on the
alignment of the E820 ranges, this may result in using smaller page sizes
(i.e. 4K instead of 2M or 1G), as in the example below.
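
As a concrete example from the e820 above, the usable range ending at
0x00000000c7ebffff stops 0xc0000 bytes (768 KB) past the last 2M boundary:

 0xc7ec0000 & (2M - 1) = 0xc0000

so that 768 KB tail has to be mapped with 4K pages instead of one more 2M page.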

-v2: move changes from setup.c to mm/init.c, also use for_each_mem_pfn_range
	instead.  - Yinghai Lu
-v3: add calculate_all_table_space_size() to get correct needed page table
	size. - Yinghai Lu
-v4: fix add_pfn_range_mapped() to get correct max_low_pfn_mapped when the
     mem map has a hole under 4g, as found by Konrad on a Xen domU with
     8g of ram. - Yinghai

Signed-off-by: Jacob Shin <jacob.shin@amd.com>
Link: http://lkml.kernel.org/r/1353123563-3103-16-git-send-email-yinghai@kernel.org
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
---
 arch/x86/include/asm/page_types.h |   8 +--
 arch/x86/kernel/setup.c           |   8 ++-
 arch/x86/mm/init.c                | 120 ++++++++++++++++++++++++++++++++++----
 arch/x86/mm/init_64.c             |   6 +-
 4 files changed, 117 insertions(+), 25 deletions(-)

diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
index 45aae6e..54c9787 100644
--- a/arch/x86/include/asm/page_types.h
+++ b/arch/x86/include/asm/page_types.h
@@ -51,13 +51,7 @@ static inline phys_addr_t get_max_mapped(void)
 	return (phys_addr_t)max_pfn_mapped << PAGE_SHIFT;
 }
 
-static inline bool pfn_range_is_mapped(unsigned long start_pfn,
-					unsigned long end_pfn)
-{
-	return end_pfn <= max_low_pfn_mapped ||
-	       (end_pfn > (1UL << (32 - PAGE_SHIFT)) &&
-		end_pfn <= max_pfn_mapped);
-}
+bool pfn_range_is_mapped(unsigned long start_pfn, unsigned long end_pfn);
 
 extern unsigned long init_memory_mapping(unsigned long start,
 					 unsigned long end);
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index bd52f9d..68dffec 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -116,9 +116,11 @@
 #include <asm/prom.h>
 
 /*
- * end_pfn only includes RAM, while max_pfn_mapped includes all e820 entries.
- * The direct mapping extends to max_pfn_mapped, so that we can directly access
- * apertures, ACPI and other tables without having to play with fixmaps.
+ * max_low_pfn_mapped: highest direct mapped pfn under 4GB
+ * max_pfn_mapped:     highest direct mapped pfn over 4GB
+ *
+ * The direct mapping only covers E820_RAM regions, so the ranges and gaps are
+ * represented by pfn_mapped
  */
 unsigned long max_low_pfn_mapped;
 unsigned long max_pfn_mapped;
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 7b961d0..bb44e9f 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -243,6 +243,38 @@ static unsigned long __init calculate_table_space_size(unsigned long start, unsi
 	return tables;
 }
 
+static unsigned long __init calculate_all_table_space_size(void)
+{
+	unsigned long start_pfn, end_pfn;
+	unsigned long tables;
+	int i;
+
+	/* the ISA range is always mapped regardless of memory holes */
+	tables = calculate_table_space_size(0, ISA_END_ADDRESS);
+
+	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
+		u64 start = start_pfn << PAGE_SHIFT;
+		u64 end = end_pfn << PAGE_SHIFT;
+
+		if (end <= ISA_END_ADDRESS)
+			continue;
+
+		if (start < ISA_END_ADDRESS)
+			start = ISA_END_ADDRESS;
+#ifdef CONFIG_X86_32
+		/* on 32 bit, we only map up to max_low_pfn */
+		if ((start >> PAGE_SHIFT) >= max_low_pfn)
+			continue;
+
+		if ((end >> PAGE_SHIFT) > max_low_pfn)
+			end = max_low_pfn << PAGE_SHIFT;
+#endif
+		tables += calculate_table_space_size(start, end);
+	}
+
+	return tables;
+}
+
 static void __init find_early_table_space(unsigned long start,
 					  unsigned long good_end,
 					  unsigned long tables)
@@ -258,6 +290,34 @@ static void __init find_early_table_space(unsigned long start,
 	pgt_buf_top = pgt_buf_start + (tables >> PAGE_SHIFT);
 }
 
+static struct range pfn_mapped[E820_X_MAX];
+static int nr_pfn_mapped;
+
+static void add_pfn_range_mapped(unsigned long start_pfn, unsigned long end_pfn)
+{
+	nr_pfn_mapped = add_range_with_merge(pfn_mapped, E820_X_MAX,
+					     nr_pfn_mapped, start_pfn, end_pfn);
+	nr_pfn_mapped = clean_sort_range(pfn_mapped, E820_X_MAX);
+
+	max_pfn_mapped = max(max_pfn_mapped, end_pfn);
+
+	if (start_pfn < (1UL<<(32-PAGE_SHIFT)))
+		max_low_pfn_mapped = max(max_low_pfn_mapped,
+					 min(end_pfn, 1UL<<(32-PAGE_SHIFT)));
+}
+
+bool pfn_range_is_mapped(unsigned long start_pfn, unsigned long end_pfn)
+{
+	int i;
+
+	for (i = 0; i < nr_pfn_mapped; i++)
+		if ((start_pfn >= pfn_mapped[i].start) &&
+		    (end_pfn <= pfn_mapped[i].end))
+			return true;
+
+	return false;
+}
+
 /*
  * Setup the direct mapping of the physical memory at PAGE_OFFSET.
  * This runs before bootmem is initialized and gets pages directly from
@@ -288,9 +348,55 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
 
 	__flush_tlb_all();
 
+	add_pfn_range_mapped(start >> PAGE_SHIFT, ret >> PAGE_SHIFT);
+
 	return ret >> PAGE_SHIFT;
 }
 
+/*
+ * Iterate through E820 memory map and create direct mappings for only E820_RAM
+ * regions. We cannot simply create direct mappings for all pfns from
+ * [0 to max_low_pfn) and [4GB to max_pfn) because of possible memory holes in
+ * high addresses that cannot be marked as UC by fixed/variable range MTRRs.
+ * Depending on the alignment of E820 ranges, this may possibly result in using
+ * smaller size (i.e. 4K instead of 2M or 1G) page tables.
+ */
+static void __init init_all_memory_mapping(void)
+{
+	unsigned long start_pfn, end_pfn;
+	int i;
+
+	/* the ISA range is always mapped regardless of memory holes */
+	init_memory_mapping(0, ISA_END_ADDRESS);
+
+	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
+		u64 start = (u64)start_pfn << PAGE_SHIFT;
+		u64 end = (u64)end_pfn << PAGE_SHIFT;
+
+		if (end <= ISA_END_ADDRESS)
+			continue;
+
+		if (start < ISA_END_ADDRESS)
+			start = ISA_END_ADDRESS;
+#ifdef CONFIG_X86_32
+		/* on 32 bit, we only map up to max_low_pfn */
+		if ((start >> PAGE_SHIFT) >= max_low_pfn)
+			continue;
+
+		if ((end >> PAGE_SHIFT) > max_low_pfn)
+			end = max_low_pfn << PAGE_SHIFT;
+#endif
+		init_memory_mapping(start, end);
+	}
+
+#ifdef CONFIG_X86_64
+	if (max_pfn > max_low_pfn) {
+		/* can we preseve max_low_pfn ?*/
+		max_low_pfn = max_pfn;
+	}
+#endif
+}
+
 void __init init_mem_mapping(void)
 {
 	unsigned long tables, good_end, end;
@@ -311,23 +417,15 @@ void __init init_mem_mapping(void)
 	end = max_low_pfn << PAGE_SHIFT;
 	good_end = max_pfn_mapped << PAGE_SHIFT;
 #endif
-	tables = calculate_table_space_size(0, end);
+	tables = calculate_all_table_space_size();
 	find_early_table_space(0, good_end, tables);
 	printk(KERN_DEBUG "kernel direct mapping tables up to %#lx @ [mem %#010lx-%#010lx] prealloc\n",
 		end - 1, pgt_buf_start << PAGE_SHIFT,
 		(pgt_buf_top << PAGE_SHIFT) - 1);
 
-	max_low_pfn_mapped = init_memory_mapping(0, max_low_pfn<<PAGE_SHIFT);
-	max_pfn_mapped = max_low_pfn_mapped;
+	max_pfn_mapped = 0; /* will get exact value next */
+	init_all_memory_mapping();
 
-#ifdef CONFIG_X86_64
-	if (max_pfn > max_low_pfn) {
-		max_pfn_mapped = init_memory_mapping(1UL<<32,
-						     max_pfn<<PAGE_SHIFT);
-		/* can we preseve max_low_pfn ?*/
-		max_low_pfn = max_pfn;
-	}
-#endif
 	/*
 	 * Reserve the kernel pagetable pages we used (pgt_buf_start -
 	 * pgt_buf_end) and free the other ones (pgt_buf_end - pgt_buf_top)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 3baff25..32c7e38 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -662,13 +662,11 @@ int arch_add_memory(int nid, u64 start, u64 size)
 {
 	struct pglist_data *pgdat = NODE_DATA(nid);
 	struct zone *zone = pgdat->node_zones + ZONE_NORMAL;
-	unsigned long last_mapped_pfn, start_pfn = start >> PAGE_SHIFT;
+	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 	int ret;
 
-	last_mapped_pfn = init_memory_mapping(start, start + size);
-	if (last_mapped_pfn > max_pfn_mapped)
-		max_pfn_mapped = last_mapped_pfn;
+	init_memory_mapping(start, start + size);
 
 	ret = __add_pages(nid, zone, start_pfn, nr_pages);
 	WARN_ON_ONCE(ret);
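
For illustration only, below is a minimal userspace model of the pfn-range
tracking introduced by this patch. The simplified merge loop stands in for
the kernel's add_range_with_merge()/clean_sort_range(), and the array size
and test values are assumptions, not the kernel implementation:

#include <stdbool.h>
#include <stdio.h>

#define NR_RANGES 16

struct range { unsigned long start, end; };	/* pfn range [start, end) */

static struct range pfn_mapped[NR_RANGES];
static int nr_pfn_mapped;

/* Record a newly direct-mapped pfn range, merging overlapping entries. */
static void add_pfn_range_mapped(unsigned long start, unsigned long end)
{
	int i, j;

	for (i = 0; i < nr_pfn_mapped; i++) {
		if (start <= pfn_mapped[i].end && end >= pfn_mapped[i].start) {
			/* Absorb the overlapping entry and rescan. */
			if (pfn_mapped[i].start < start)
				start = pfn_mapped[i].start;
			if (pfn_mapped[i].end > end)
				end = pfn_mapped[i].end;
			for (j = i; j < nr_pfn_mapped - 1; j++)
				pfn_mapped[j] = pfn_mapped[j + 1];
			nr_pfn_mapped--;
			i = -1;
		}
	}
	pfn_mapped[nr_pfn_mapped].start = start;
	pfn_mapped[nr_pfn_mapped].end = end;
	nr_pfn_mapped++;
}

/* True only if [start_pfn, end_pfn) lies entirely inside one mapped range. */
static bool pfn_range_is_mapped(unsigned long start_pfn, unsigned long end_pfn)
{
	int i;

	for (i = 0; i < nr_pfn_mapped; i++)
		if (start_pfn >= pfn_mapped[i].start &&
		    end_pfn <= pfn_mapped[i].end)
			return true;
	return false;
}

int main(void)
{
	/* Two E820_RAM regions separated by a hole, as in the e820 above. */
	add_pfn_range_mapped(0x0, 0xc7ec0);		/* up to 0xc7ec0000  */
	add_pfn_range_mapped(0x100000, 0xe038000);	/* 4G..0xe038000000  */

	printf("%d\n", pfn_range_is_mapped(0x100000, 0x200000));	/* 1 */
	printf("%d\n", pfn_range_is_mapped(0xe038000, 0xe039000));	/* 0 */
	return 0;
}

Compiled and run, this reports the below-4G range as mapped and the hole
above 0xe038000000 as unmapped, which is the property that later patches in
this series (e.g. "x86, mm: use pfn_range_is_mapped() with CPA") rely on.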


Thread overview: 119+ messages
2012-11-17  3:38 [PATCH v8 00/46] x86, mm: map ram from top-down with BRK and memblock Yinghai Lu
2012-11-17  3:38 ` [PATCH v8 01/46] x86, mm: Add global page_size_mask and probe one time only Yinghai Lu
2012-11-22  1:38   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:38 ` [PATCH v8 02/46] x86, mm: Split out split_mem_range from init_memory_mapping Yinghai Lu
2012-11-22  1:39   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:38 ` [PATCH v8 03/46] x86, mm: Move down find_early_table_space() Yinghai Lu
2012-11-22  1:40   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-28 16:50   ` [PATCH v8 03/46] " Konrad Rzeszutek Wilk
2012-11-17  3:38 ` [PATCH v8 04/46] x86, mm: Move init_memory_mapping calling out of setup.c Yinghai Lu
2012-11-22  1:41   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-28 16:50   ` [PATCH v8 04/46] " Konrad Rzeszutek Wilk
2012-11-17  3:38 ` [PATCH v8 05/46] x86, mm: Revert back good_end setting for 64bit Yinghai Lu
2012-11-22  1:42   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:38 ` [PATCH v8 06/46] x86, mm: Change find_early_table_space() paramters Yinghai Lu
2012-11-22  1:43   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-28 16:50   ` [PATCH v8 06/46] " Konrad Rzeszutek Wilk
2012-11-17  3:38 ` [PATCH v8 07/46] x86, mm: Find early page table buffer together Yinghai Lu
2012-11-22  1:44   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-28 16:50   ` [PATCH v8 07/46] " Konrad Rzeszutek Wilk
2012-11-17  3:38 ` [PATCH v8 08/46] x86, mm: Separate out calculate_table_space_size() Yinghai Lu
2012-11-22  1:45   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-28 16:59   ` [PATCH v8 08/46] " Konrad Rzeszutek Wilk
2012-11-17  3:38 ` [PATCH v8 09/46] x86, mm: Set memblock initial limit to 1M Yinghai Lu
2012-11-22  1:46   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:38 ` [PATCH v8 10/46] x86, mm: if kernel .text .data .bss are not marked as E820_RAM, complain and fix Yinghai Lu
2012-11-22  1:47   ` [tip:x86/mm2] " tip-bot for Jacob Shin
2012-11-17  3:38 ` [PATCH v8 11/46] x86, mm: Fixup code testing if a pfn is direct mapped Yinghai Lu
2012-11-22  1:48   ` [tip:x86/mm2] " tip-bot for Jacob Shin
2012-11-17  3:38 ` [PATCH v8 12/46] x86, mm: use pfn_range_is_mapped() with CPA Yinghai Lu
2012-11-22  1:49   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-28 17:06   ` [PATCH v8 12/46] " Konrad Rzeszutek Wilk
2012-11-28 19:33     ` Yinghai Lu
2012-11-17  3:38 ` [PATCH v8 13/46] x86, mm: use pfn_range_is_mapped() with gart Yinghai Lu
2012-11-22  1:50   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-28 17:07   ` [PATCH v8 13/46] " Konrad Rzeszutek Wilk
2012-11-17  3:38 ` [PATCH v8 14/46] x86, mm: use pfn_range_is_mapped() with reserve_initrd Yinghai Lu
2012-11-22  1:51   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-28 17:08   ` [PATCH v8 14/46] " Konrad Rzeszutek Wilk
2012-11-28 19:40     ` Yinghai Lu
2012-11-17  3:38 ` [PATCH v8 15/46] x86, mm: Only direct map addresses that are marked as E820_RAM Yinghai Lu
2012-11-22  1:53   ` tip-bot for Jacob Shin [this message]
2012-11-28 17:15   ` Konrad Rzeszutek Wilk
2012-11-28 19:43     ` Yinghai Lu
2012-11-17  3:38 ` [PATCH v8 16/46] x86, mm: relocate initrd under all mem for 64bit Yinghai Lu
2012-11-22  1:54   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:38 ` [PATCH v8 17/46] x86, mm: Align start address to correct big page size Yinghai Lu
2012-11-22  1:55   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:38 ` [PATCH v8 18/46] x86, mm: Use big page size for small memory range Yinghai Lu
2012-11-22  1:56   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:38 ` [PATCH v8 19/46] x86, mm: Don't clear page table if range is ram Yinghai Lu
2012-11-22  1:57   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:38 ` [PATCH v8 20/46] x86, mm: Break down init_all_memory_mapping Yinghai Lu
2012-11-22  1:58   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:38 ` [PATCH v8 21/46] x86, mm: setup page table in top-down Yinghai Lu
2012-11-22  1:59   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-28 17:50   ` [PATCH v8 21/46] " Konrad Rzeszutek Wilk
2012-11-28 20:16     ` Yinghai Lu
2012-12-05 21:53       ` Konrad Rzeszutek Wilk
2012-11-17  3:38 ` [PATCH v8 22/46] x86, mm: Remove early_memremap workaround for page table accessing on 64bit Yinghai Lu
2012-11-22  2:00   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-28 18:57   ` [PATCH v8 22/46] " Konrad Rzeszutek Wilk
2012-11-17  3:39 ` [PATCH v8 23/46] x86, mm: Remove parameter in alloc_low_page for 64bit Yinghai Lu
2012-11-22  2:01   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-28 18:57   ` [PATCH v8 23/46] " Konrad Rzeszutek Wilk
2012-11-17  3:39 ` [PATCH v8 24/46] x86, mm: Merge alloc_low_page between 64bit and 32bit Yinghai Lu
2012-11-22  2:02   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 25/46] x86, mm: Move min_pfn_mapped back to mm/init.c Yinghai Lu
2012-11-22  2:03   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 26/46] x86, mm, Xen: Remove mapping_pagetable_reserve() Yinghai Lu
2012-11-22  2:04   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 27/46] x86, mm: Add alloc_low_pages(num) Yinghai Lu
2012-11-22  2:05   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 28/46] x86, mm: Add pointer about Xen mmu requirement for alloc_low_pages Yinghai Lu
2012-11-22  2:06   ` [tip:x86/mm2] " tip-bot for Stefano Stabellini
2012-11-17  3:39 ` [PATCH v8 29/46] x86, mm: only call early_ioremap_page_table_range_init() once Yinghai Lu
2012-11-22  2:07   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-28 19:02   ` [PATCH v8 29/46] " Konrad Rzeszutek Wilk
2012-11-17  3:39 ` [PATCH v8 30/46] x86, mm: Move back pgt_buf_* to mm/init.c Yinghai Lu
2012-11-22  2:08   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 31/46] x86, mm: Move init_gbpages() out of setup.c Yinghai Lu
2012-11-22  2:09   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 32/46] x86, mm: change low/hignmem_pfn_init to static on 32bit Yinghai Lu
2012-11-22  2:10   ` [tip:x86/mm2] x86, mm: change low/ hignmem_pfn_init " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 33/46] x86, mm: Move function declaration into mm_internal.h Yinghai Lu
2012-11-22  2:11   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 34/46] x86, mm: Add check before clear pte above max_low_pfn on 32bit Yinghai Lu
2012-11-22  2:13   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-28 19:09   ` [PATCH v8 34/46] " Konrad Rzeszutek Wilk
2012-11-28 20:38     ` Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 35/46] x86, mm: use round_up/down in split_mem_range() Yinghai Lu
2012-11-22  2:14   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 36/46] x86, mm: use PFN_DOWN " Yinghai Lu
2012-11-22  2:15   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 37/46] x86, mm: use pfn instead of pos in split_mem_range Yinghai Lu
2012-11-22  2:16   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 38/46] x86, mm: use limit_pfn for end pfn Yinghai Lu
2012-11-22  2:17   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 39/46] x86, mm: Unifying after_bootmem for 32bit and 64bit Yinghai Lu
2012-11-22  2:18   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 40/46] x86, mm: Move after_bootmem to mm_internel.h Yinghai Lu
2012-11-22  2:19   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 41/46] x86, mm: Use clamp_t() in init_range_memory_mapping Yinghai Lu
2012-11-22  2:20   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 42/46] x86, mm: kill numa_free_all_bootmem() Yinghai Lu
2012-11-22  2:21   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 43/46] x86, mm: kill numa_64.h Yinghai Lu
2012-11-22  2:22   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 44/46] sparc, mm: Remove calling of free_all_bootmem_node() Yinghai Lu
2012-11-22  2:23   ` [tip:x86/mm2] sparc, mm: Remove calling of free_all_bootmem_node( ) tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 45/46] mm: Kill NO_BOOTMEM version free_all_bootmem_node() Yinghai Lu
2012-11-22  2:24   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-17  3:39 ` [PATCH v8 46/46] x86, mm: Let "memmap=" take more entries one time Yinghai Lu
2012-11-22  2:25   ` [tip:x86/mm2] " tip-bot for Yinghai Lu
2012-11-28 19:12   ` [PATCH v8 46/46] " Konrad Rzeszutek Wilk
2012-11-27 21:17 ` [PATCH v8 00/46] x86, mm: map ram from top-down with BRK and memblock Konrad Rzeszutek Wilk
2012-11-28 19:35   ` Konrad Rzeszutek Wilk
2012-11-28 19:47     ` Yinghai Lu
2012-11-28 20:57       ` Konrad Rzeszutek Wilk
2012-11-28 21:06         ` Yinghai Lu
