From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Yinghai Lu <yinghai@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@elte.hu>,
	"H. Peter Anvin" <hpa@zytor.com>, Jacob Shin <jacob.shin@amd.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v8 21/46] x86, mm: setup page table in top-down
Date: Wed, 28 Nov 2012 12:50:07 -0500
Message-ID: <20121128175007.GN21266@phenom.dumpdata.com>
In-Reply-To: <1353123563-3103-22-git-send-email-yinghai@kernel.org>

On Fri, Nov 16, 2012 at 07:38:58PM -0800, Yinghai Lu wrote:
> Get pgt_buf early from the BRK, and use it to map a PMD_SIZE range from
> the top of memory first. Then use the just-mapped pages to map more
> ranges below, and keep looping until all pages get mapped.
> 
> alloc_low_page() will use pages from the BRK at first; after that buffer
> is used up, it will use memblock to find and reserve pages for page
> table use.
> 
> Introduce min_pfn_mapped to make sure we only take new pages from
> already-mapped ranges; it is updated as lower pages get mapped.
> 
> Also add step_size to make sure we don't try to map too big a range
> while only a limited number of pages is mapped initially, and increase
> step_size once we have more mapped pages on hand.
> 
> We don't need to call pagetable_reserve() anymore; the reserve work is
> done directly in alloc_low_page().
> 
> At last we can get rid of the code that calculated and found space for
> the early page tables.
> 
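
So if I am reading this right, the allocation flow boils down to: hand
out pages from the small BRK buffer first, and once that buffer is used
up, steal pages from the already-mapped range via memblock. A little
userspace sketch of that idea (pick_mapped_pfn() is a made-up stand-in
for the memblock_find_in_range()/memblock_reserve() pair, not a real
kernel API):

/* Toy model of the BRK-then-memblock allocation flow described above. */
#include <stdio.h>

static unsigned long pgt_buf_end = 0x100;	/* pretend the BRK buffer starts at pfn 0x100 */
static unsigned long pgt_buf_top = 0x100 + 5;	/* INIT_PGT_BUF_SIZE is 5 pages */
static unsigned long min_pfn_mapped = 0x7fe00;	/* lowest pfn that is already mapped */

/* made-up stand-in: just hand out pfns from the mapped range */
static unsigned long pick_mapped_pfn(void)
{
	return min_pfn_mapped++;
}

static unsigned long alloc_low_pfn(void)
{
	if (pgt_buf_end + 1 >= pgt_buf_top)	/* BRK buffer used up */
		return pick_mapped_pfn();
	return pgt_buf_end++;			/* cheap path: a BRK page */
}

int main(void)
{
	int i;

	for (i = 0; i < 7; i++)
		printf("pgt page %d -> pfn %#lx\n", i, alloc_low_pfn());
	return 0;
}

Running that hands out four pfns from the BRK buffer and then falls
over to the mapped range, matching the "pgt_buf_end + 1 >= pgt_buf_top"
check in the patch below.
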
> -v2: update to apply after the fix_xen change;
>      also use a macro for the initial pgt_buf size and add comments with it.
> -v3: skip the big reserved range in memblock.reserved near the end.
> -v4: the fix_xen change is not needed anymore.
> -v5: add a changelog note about moving the pagetable reserving into alloc_low_page().
> 
> Suggested-by: "H. Peter Anvin" <hpa@zytor.com>
> Signed-off-by: Yinghai Lu <yinghai@kernel.org>
> ---
>  arch/x86/include/asm/page_types.h |    1 +
>  arch/x86/include/asm/pgtable.h    |    1 +
>  arch/x86/kernel/setup.c           |    3 +
>  arch/x86/mm/init.c                |  210 +++++++++++--------------------------
>  arch/x86/mm/init_32.c             |   17 +++-
>  arch/x86/mm/init_64.c             |   17 +++-
>  6 files changed, 94 insertions(+), 155 deletions(-)
> 
> diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
> index 54c9787..9f6f3e6 100644
> --- a/arch/x86/include/asm/page_types.h
> +++ b/arch/x86/include/asm/page_types.h
> @@ -45,6 +45,7 @@ extern int devmem_is_allowed(unsigned long pagenr);
>  
>  extern unsigned long max_low_pfn_mapped;
>  extern unsigned long max_pfn_mapped;
> +extern unsigned long min_pfn_mapped;
>  
>  static inline phys_addr_t get_max_mapped(void)
>  {
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index dd1a888..6991a3e 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -603,6 +603,7 @@ static inline int pgd_none(pgd_t pgd)
>  
>  extern int direct_gbpages;
>  void init_mem_mapping(void);
> +void early_alloc_pgt_buf(void);
>  
>  /* local pte updates need not use xchg for locking */
>  static inline pte_t native_local_ptep_get_and_clear(pte_t *ptep)
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index 94f922a..f7634092 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -124,6 +124,7 @@
>   */
>  unsigned long max_low_pfn_mapped;
>  unsigned long max_pfn_mapped;
> +unsigned long min_pfn_mapped;
>  
>  #ifdef CONFIG_DMI
>  RESERVE_BRK(dmi_alloc, 65536);
> @@ -900,6 +901,8 @@ void __init setup_arch(char **cmdline_p)
>  
>  	reserve_ibft_region();
>  
> +	early_alloc_pgt_buf();
> +
>  	/*
>  	 * Need to conclude brk, before memblock_x86_fill()
>  	 *  it could use memblock_find_in_range, could overlap with
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index c688ea3..2393d00 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -21,6 +21,21 @@ unsigned long __initdata pgt_buf_start;
>  unsigned long __meminitdata pgt_buf_end;
>  unsigned long __meminitdata pgt_buf_top;
>  
> +/* need 4 4k for initial PMD_SIZE, 4k for 0-ISA_END_ADDRESS */
> +#define INIT_PGT_BUF_SIZE	(5 * PAGE_SIZE)
> +RESERVE_BRK(early_pgt_alloc, INIT_PGT_BUF_SIZE);
> +void  __init early_alloc_pgt_buf(void)
> +{
> +	unsigned long tables = INIT_PGT_BUF_SIZE;
> +	phys_addr_t base;
> +
> +	base = __pa(extend_brk(tables, PAGE_SIZE));
> +
> +	pgt_buf_start = base >> PAGE_SHIFT;
> +	pgt_buf_end = pgt_buf_start;
> +	pgt_buf_top = pgt_buf_start + (tables >> PAGE_SHIFT);
> +}
> +
>  int after_bootmem;
>  
>  int direct_gbpages
> @@ -228,105 +243,6 @@ static int __meminit split_mem_range(struct map_range *mr, int nr_range,
>  	return nr_range;
>  }
>  
> -/*
> - * First calculate space needed for kernel direct mapping page tables to cover
> - * mr[0].start to mr[nr_range - 1].end, while accounting for possible 2M and 1GB
> - * pages. Then find enough contiguous space for those page tables.
> - */
> -static unsigned long __init calculate_table_space_size(unsigned long start, unsigned long end)
> -{
> -	int i;
> -	unsigned long puds = 0, pmds = 0, ptes = 0, tables;
> -	struct map_range mr[NR_RANGE_MR];
> -	int nr_range;
> -
> -	memset(mr, 0, sizeof(mr));
> -	nr_range = 0;
> -	nr_range = split_mem_range(mr, nr_range, start, end);
> -
> -	for (i = 0; i < nr_range; i++) {
> -		unsigned long range, extra;
> -
> -		range = mr[i].end - mr[i].start;
> -		puds += (range + PUD_SIZE - 1) >> PUD_SHIFT;
> -
> -		if (mr[i].page_size_mask & (1 << PG_LEVEL_1G)) {
> -			extra = range - ((range >> PUD_SHIFT) << PUD_SHIFT);
> -			pmds += (extra + PMD_SIZE - 1) >> PMD_SHIFT;
> -		} else {
> -			pmds += (range + PMD_SIZE - 1) >> PMD_SHIFT;
> -		}
> -
> -		if (mr[i].page_size_mask & (1 << PG_LEVEL_2M)) {
> -			extra = range - ((range >> PMD_SHIFT) << PMD_SHIFT);
> -#ifdef CONFIG_X86_32
> -			extra += PMD_SIZE;
> -#endif
> -			ptes += (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
> -		} else {
> -			ptes += (range + PAGE_SIZE - 1) >> PAGE_SHIFT;
> -		}
> -	}
> -
> -	tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
> -	tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
> -	tables += roundup(ptes * sizeof(pte_t), PAGE_SIZE);
> -
> -#ifdef CONFIG_X86_32
> -	/* for fixmap */
> -	tables += roundup(__end_of_fixed_addresses * sizeof(pte_t), PAGE_SIZE);
> -#endif
> -
> -	return tables;
> -}
> -
> -static unsigned long __init calculate_all_table_space_size(void)
> -{
> -	unsigned long start_pfn, end_pfn;
> -	unsigned long tables;
> -	int i;
> -
> -	/* the ISA range is always mapped regardless of memory holes */
> -	tables = calculate_table_space_size(0, ISA_END_ADDRESS);
> -
> -	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
> -		u64 start = start_pfn << PAGE_SHIFT;
> -		u64 end = end_pfn << PAGE_SHIFT;
> -
> -		if (end <= ISA_END_ADDRESS)
> -			continue;
> -
> -		if (start < ISA_END_ADDRESS)
> -			start = ISA_END_ADDRESS;
> -#ifdef CONFIG_X86_32
> -		/* on 32 bit, we only map up to max_low_pfn */
> -		if ((start >> PAGE_SHIFT) >= max_low_pfn)
> -			continue;
> -
> -		if ((end >> PAGE_SHIFT) > max_low_pfn)
> -			end = max_low_pfn << PAGE_SHIFT;
> -#endif
> -		tables += calculate_table_space_size(start, end);
> -	}
> -
> -	return tables;
> -}
> -
> -static void __init find_early_table_space(unsigned long start,
> -					  unsigned long good_end,
> -					  unsigned long tables)
> -{
> -	phys_addr_t base;
> -
> -	base = memblock_find_in_range(start, good_end, tables, PAGE_SIZE);
> -	if (!base)
> -		panic("Cannot find space for the kernel page tables");
> -
> -	pgt_buf_start = base >> PAGE_SHIFT;
> -	pgt_buf_end = pgt_buf_start;
> -	pgt_buf_top = pgt_buf_start + (tables >> PAGE_SHIFT);
> -}
> -
>  static struct range pfn_mapped[E820_X_MAX];
>  static int nr_pfn_mapped;
>  
> @@ -391,17 +307,14 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
>  }
>  
>  /*
> - * Iterate through E820 memory map and create direct mappings for only E820_RAM
> - * regions. We cannot simply create direct mappings for all pfns from
> - * [0 to max_low_pfn) and [4GB to max_pfn) because of possible memory holes in
> - * high addresses that cannot be marked as UC by fixed/variable range MTRRs.
> - * Depending on the alignment of E820 ranges, this may possibly result in using
> - * smaller size (i.e. 4K instead of 2M or 1G) page tables.
> + * would have hole in the middle or ends, and only ram parts will be mapped.


What? What is the 'would' referring to? Why remove a good comment that explains
the function? Why not just modify it a bit, please?


Can you make it more obvious please?

>   */
> -static void __init init_range_memory_mapping(unsigned long range_start,
> +static unsigned long __init init_range_memory_mapping(
> +					   unsigned long range_start,
>  					   unsigned long range_end)
>  {
>  	unsigned long start_pfn, end_pfn;
> +	unsigned long mapped_ram_size = 0;
>  	int i;
>  
>  	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
> @@ -421,71 +334,70 @@ static void __init init_range_memory_mapping(unsigned long range_start,
>  			end = range_end;
>  
>  		init_memory_mapping(start, end);
> +
> +		mapped_ram_size += end - start;
>  	}
> +
> +	return mapped_ram_size;
>  }
>  
> +/* (PUD_SHIFT-PMD_SHIFT)/2 */
> +#define STEP_SIZE_SHIFT 5
>  void __init init_mem_mapping(void)
>  {
> -	unsigned long tables, good_end, end;
> +	unsigned long end, real_end, start, last_start;
> +	unsigned long step_size;
> +	unsigned long addr;
> +	unsigned long mapped_ram_size = 0;
> +	unsigned long new_mapped_ram_size;
>  
>  	probe_page_size_mask();
>  
> -	/*
> -	 * Find space for the kernel direct mapping tables.
> -	 *
> -	 * Later we should allocate these tables in the local node of the
> -	 * memory mapped. Unfortunately this is done currently before the
> -	 * nodes are discovered.
> -	 */
>  #ifdef CONFIG_X86_64
>  	end = max_pfn << PAGE_SHIFT;
> -	good_end = end;
>  #else
>  	end = max_low_pfn << PAGE_SHIFT;
> -	good_end = max_pfn_mapped << PAGE_SHIFT;
>  #endif
> -	tables = calculate_all_table_space_size();
> -	find_early_table_space(0, good_end, tables);
> -	printk(KERN_DEBUG "kernel direct mapping tables up to %#lx @ [mem %#010lx-%#010lx] prealloc\n",
> -		end - 1, pgt_buf_start << PAGE_SHIFT,
> -		(pgt_buf_top << PAGE_SHIFT) - 1);
>  
> -	max_pfn_mapped = 0; /* will get exact value next */
>  	/* the ISA range is always mapped regardless of memory holes */
>  	init_memory_mapping(0, ISA_END_ADDRESS);
> -	init_range_memory_mapping(ISA_END_ADDRESS, end);
> +
> +	/* xen has big range in reserved near end of ram, skip it at first */

I am not seeing the logic for doing it. The loop is quite generic
in doing it in reverse order, and memblock_find_in_range()
gets a nice PMD_SIZE region from the end of memory.

If the memory at the end is reserved, then it looks like it won't
even be considered in the loop, but it does get included in the fallback:

	if (real_end < end)
		init_range_memory_mapping(real_end, end);
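
For my own sanity, a toy walk-through of that corner case (made-up
numbers, assuming 64-bit longs: pretend the top 64 MiB of a 4 GiB box
is a Xen reservation):

#include <stdio.h>

#define PMD_SIZE	(2UL << 20)

int main(void)
{
	unsigned long end = 4UL << 30;			/* RAM ends at 4 GiB */
	unsigned long resv = end - (64UL << 20);	/* top 64 MiB reserved */

	/* memblock_find_in_range() skips the reservation, so the
	 * PMD_SIZE block lands just below it: */
	unsigned long addr = resv - PMD_SIZE;
	unsigned long real_end = addr + PMD_SIZE;	/* == resv */

	printf("loop covers   [1M - %#lx)\n", real_end);
	printf("fallback maps [%#lx - %#lx)\n", real_end, end);
	return 0;
}

So the reserved tail only ever gets mapped by that fallback call.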



> +	addr = memblock_find_in_range(ISA_END_ADDRESS, end, PMD_SIZE,
> +			 PAGE_SIZE);
> +	real_end = addr + PMD_SIZE;
> +
> +	/* step_size need to be small so pgt_buf from BRK could cover it */
> +	step_size = PMD_SIZE;
> +	max_pfn_mapped = 0; /* will get exact value next */
> +	min_pfn_mapped = real_end >> PAGE_SHIFT;
> +	last_start = start = real_end;

Every time I look at this loop, I keep forgetting that it goes in reverse.
I am not sure if it is just me, but it might be useful for other
folks who are going to look at this in a year or so to have
a little hint:

N.B. We start from the top (end of memory) and go down to the bottom. The
memblock_find_in_range() call gets us a block of RAM from the end
of RAM.
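
To make the direction concrete, a standalone little demo of the walk
(simplified sketch: the real loop only grows step_size after a bigger
range than before got mapped, here it grows on every pass; assumes
64-bit longs):

#include <stdio.h>

#define PMD_SIZE	(2UL << 20)	/* 2 MiB */
#define ISA_END_ADDRESS	(1UL << 20)	/* 1 MiB */
#define STEP_SIZE_SHIFT	5		/* (PUD_SHIFT-PMD_SHIFT)/2 */

static unsigned long round_down(unsigned long x, unsigned long align)
{
	return x & ~(align - 1);	/* align must be a power of two */
}

int main(void)
{
	unsigned long real_end = 8UL << 30;	/* pretend RAM ends at 8 GiB */
	unsigned long step_size = PMD_SIZE;
	unsigned long last_start = real_end, start;

	while (last_start > ISA_END_ADDRESS) {
		if (last_start > step_size) {
			start = round_down(last_start - 1, step_size);
			if (start < ISA_END_ADDRESS)
				start = ISA_END_ADDRESS;
		} else
			start = ISA_END_ADDRESS;

		printf("map [%#011lx - %#011lx)\n", start, last_start);
		last_start = start;
		step_size <<= STEP_SIZE_SHIFT;
	}
	return 0;
}

The output makes it obvious that the ranges come out top-down, in
growing steps, which is exactly the hint I would like in a comment.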

> +	while (last_start > ISA_END_ADDRESS) {
> +		if (last_start > step_size) {
> +			start = round_down(last_start - 1, step_size);
> +			if (start < ISA_END_ADDRESS)
> +				start = ISA_END_ADDRESS;
> +		} else
> +			start = ISA_END_ADDRESS;
> +		new_mapped_ram_size = init_range_memory_mapping(start,
> +							last_start);
> +		last_start = start;
> +		min_pfn_mapped = last_start >> PAGE_SHIFT;
> +		/* only increase step_size after big range get mapped */
> +		if (new_mapped_ram_size > mapped_ram_size)
> +			step_size <<= STEP_SIZE_SHIFT;
> +		mapped_ram_size += new_mapped_ram_size;
> +	}
> +
> +	if (real_end < end)
> +		init_range_memory_mapping(real_end, end);
> +
>  #ifdef CONFIG_X86_64
>  	if (max_pfn > max_low_pfn) {
>  		/* can we preseve max_low_pfn ?*/
>  		max_low_pfn = max_pfn;
>  	}
>  #endif
> -	/*
> -	 * Reserve the kernel pagetable pages we used (pgt_buf_start -
> -	 * pgt_buf_end) and free the other ones (pgt_buf_end - pgt_buf_top)
> -	 * so that they can be reused for other purposes.
> -	 *
> -	 * On native it just means calling memblock_reserve, on Xen it also
> -	 * means marking RW the pagetable pages that we allocated before
> -	 * but that haven't been used.
> -	 *
> -	 * In fact on xen we mark RO the whole range pgt_buf_start -
> -	 * pgt_buf_top, because we have to make sure that when
> -	 * init_memory_mapping reaches the pagetable pages area, it maps
> -	 * RO all the pagetable pages, including the ones that are beyond
> -	 * pgt_buf_end at that time.
> -	 */
> -	if (pgt_buf_end > pgt_buf_start) {
> -		printk(KERN_DEBUG "kernel direct mapping tables up to %#lx @ [mem %#010lx-%#010lx] final\n",
> -			end - 1, pgt_buf_start << PAGE_SHIFT,
> -			(pgt_buf_end << PAGE_SHIFT) - 1);
> -		x86_init.mapping.pagetable_reserve(PFN_PHYS(pgt_buf_start),
> -				PFN_PHYS(pgt_buf_end));
> -	}
> -
> -	/* stop the wrong using */
> -	pgt_buf_top = 0;
> -
>  	early_memtest(0, max_pfn_mapped << PAGE_SHIFT);
>  }
>  
> diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
> index 27f7fc6..7bb1106 100644
> --- a/arch/x86/mm/init_32.c
> +++ b/arch/x86/mm/init_32.c
> @@ -61,11 +61,22 @@ bool __read_mostly __vmalloc_start_set = false;
>  
>  static __init void *alloc_low_page(void)
>  {
> -	unsigned long pfn = pgt_buf_end++;
> +	unsigned long pfn;
>  	void *adr;
>  
> -	if (pfn >= pgt_buf_top)
> -		panic("alloc_low_page: ran out of memory");
> +	if ((pgt_buf_end + 1) >= pgt_buf_top) {
> +		unsigned long ret;
> +		if (min_pfn_mapped >= max_pfn_mapped)
> +			panic("alloc_low_page: ran out of memory");
> +		ret = memblock_find_in_range(min_pfn_mapped << PAGE_SHIFT,
> +					max_pfn_mapped << PAGE_SHIFT,
> +					PAGE_SIZE, PAGE_SIZE);
> +		if (!ret)
> +			panic("alloc_low_page: can not alloc memory");
> +		memblock_reserve(ret, PAGE_SIZE);
> +		pfn = ret >> PAGE_SHIFT;
> +	} else
> +		pfn = pgt_buf_end++;
>  
>  	adr = __va(pfn * PAGE_SIZE);
>  	clear_page(adr);
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index fa28e3e..eefaea6 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -316,7 +316,7 @@ void __init cleanup_highmap(void)
>  
>  static __ref void *alloc_low_page(unsigned long *phys)
>  {
> -	unsigned long pfn = pgt_buf_end++;
> +	unsigned long pfn;
>  	void *adr;
>  
>  	if (after_bootmem) {
> @@ -326,8 +326,19 @@ static __ref void *alloc_low_page(unsigned long *phys)
>  		return adr;
>  	}
>  
> -	if (pfn >= pgt_buf_top)
> -		panic("alloc_low_page: ran out of memory");
> +	if ((pgt_buf_end + 1) >= pgt_buf_top) {
> +		unsigned long ret;
> +		if (min_pfn_mapped >= max_pfn_mapped)
> +			panic("alloc_low_page: ran out of memory");
> +		ret = memblock_find_in_range(min_pfn_mapped << PAGE_SHIFT,
> +					max_pfn_mapped << PAGE_SHIFT,
> +					PAGE_SIZE, PAGE_SIZE);
> +		if (!ret)
> +			panic("alloc_low_page: can not alloc memory");
> +		memblock_reserve(ret, PAGE_SIZE);
> +		pfn = ret >> PAGE_SHIFT;
> +	} else
> +		pfn = pgt_buf_end++;
>  
>  	adr = early_memremap(pfn * PAGE_SIZE, PAGE_SIZE);
>  	clear_page(adr);
> -- 
> 1.7.7
