From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: Yinghai Lu <yinghai@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@elte.hu>,
	"H. Peter Anvin" <hpa@zytor.com>, Jacob Shin <jacob.shin@amd.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 21/46] x86, mm: setup page table in top-down
Date: Tue, 13 Nov 2012 17:26:20 +0000	[thread overview]
Message-ID: <alpine.DEB.2.02.1211131707380.28049@kaball.uk.xensource.com> (raw)
In-Reply-To: <1352755122-25660-22-git-send-email-yinghai@kernel.org>

On Mon, 12 Nov 2012, Yinghai Lu wrote:
> Get pgt_buf early from BRK, and use it to map PMD_SIZE from the top of
> memory first. Then use the newly mapped pages to map further ranges
> below, looping until all pages are mapped.
> 
> alloc_low_page will use pages from BRK first; after that buffer is used
> up, it will use memblock to find and reserve pages for page table usage.
> 
> Introduce min_pfn_mapped to make sure new pages are only taken from
> already mapped ranges; it is updated as lower pages get mapped.
> 
> Also add step_size to make sure we don't try to map too big a range with
> the limited mapped pages available initially, and increase step_size as
> more mapped pages become available.
> 
> At last we can get rid of the early page table size calculation and the
> find-early-pgt related code.
> 
> -v2: update to after fix_xen change,
>      also use MACRO for initial pgt_buf size and add comments with it.
> -v3: skip big reserved range in memblock.reserved near end.
> -v4: don't need fix_xen change now.
> 
> Suggested-by: "H. Peter Anvin" <hpa@zytor.com>
> Signed-off-by: Yinghai Lu <yinghai@kernel.org>

The changes to alloc_low_page and early_alloc_pgt_buf look OK to me.

The changes to init_mem_mapping are a bit iffy, but not unreasonable.
Overall the patch is OK, though I would certainly appreciate more
comments and better variable names (real_end?); see below.


>  arch/x86/include/asm/page_types.h |    1 +
>  arch/x86/include/asm/pgtable.h    |    1 +
>  arch/x86/kernel/setup.c           |    3 +
>  arch/x86/mm/init.c                |  210 +++++++++++--------------------------
>  arch/x86/mm/init_32.c             |   17 +++-
>  arch/x86/mm/init_64.c             |   17 +++-
>  6 files changed, 94 insertions(+), 155 deletions(-)
> 
> diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
> index 54c9787..9f6f3e6 100644
> --- a/arch/x86/include/asm/page_types.h
> +++ b/arch/x86/include/asm/page_types.h
> @@ -45,6 +45,7 @@ extern int devmem_is_allowed(unsigned long pagenr);
> 
>  extern unsigned long max_low_pfn_mapped;
>  extern unsigned long max_pfn_mapped;
> +extern unsigned long min_pfn_mapped;
> 
>  static inline phys_addr_t get_max_mapped(void)
>  {
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index dd1a888..6991a3e 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -603,6 +603,7 @@ static inline int pgd_none(pgd_t pgd)
> 
>  extern int direct_gbpages;
>  void init_mem_mapping(void);
> +void early_alloc_pgt_buf(void);
> 
>  /* local pte updates need not use xchg for locking */
>  static inline pte_t native_local_ptep_get_and_clear(pte_t *ptep)
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index 94f922a..f7634092 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -124,6 +124,7 @@
>   */
>  unsigned long max_low_pfn_mapped;
>  unsigned long max_pfn_mapped;
> +unsigned long min_pfn_mapped;
> 
>  #ifdef CONFIG_DMI
>  RESERVE_BRK(dmi_alloc, 65536);
> @@ -900,6 +901,8 @@ void __init setup_arch(char **cmdline_p)
> 
>         reserve_ibft_region();
> 
> +       early_alloc_pgt_buf();
> +
>         /*
>          * Need to conclude brk, before memblock_x86_fill()
>          *  it could use memblock_find_in_range, could overlap with
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index 47a1ba2..76a6e82 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -21,6 +21,21 @@ unsigned long __initdata pgt_buf_start;
>  unsigned long __meminitdata pgt_buf_end;
>  unsigned long __meminitdata pgt_buf_top;
> 
> +/* need 4 4k for initial PMD_SIZE, 4k for 0-ISA_END_ADDRESS */
> +#define INIT_PGT_BUF_SIZE      (5 * PAGE_SIZE)
> +RESERVE_BRK(early_pgt_alloc, INIT_PGT_BUF_SIZE);
> +void  __init early_alloc_pgt_buf(void)
> +{
> +       unsigned long tables = INIT_PGT_BUF_SIZE;
> +       phys_addr_t base;
> +
> +       base = __pa(extend_brk(tables, PAGE_SIZE));
> +
> +       pgt_buf_start = base >> PAGE_SHIFT;
> +       pgt_buf_end = pgt_buf_start;
> +       pgt_buf_top = pgt_buf_start + (tables >> PAGE_SHIFT);
> +}
> +
>  int after_bootmem;
> 
>  int direct_gbpages
> @@ -228,105 +243,6 @@ static int __meminit split_mem_range(struct map_range *mr, int nr_range,
>         return nr_range;
>  }
> 
> -/*
> - * First calculate space needed for kernel direct mapping page tables to cover
> - * mr[0].start to mr[nr_range - 1].end, while accounting for possible 2M and 1GB
> - * pages. Then find enough contiguous space for those page tables.
> - */
> -static unsigned long __init calculate_table_space_size(unsigned long start, unsigned long end)
> -{
> -       int i;
> -       unsigned long puds = 0, pmds = 0, ptes = 0, tables;
> -       struct map_range mr[NR_RANGE_MR];
> -       int nr_range;
> -
> -       memset(mr, 0, sizeof(mr));
> -       nr_range = 0;
> -       nr_range = split_mem_range(mr, nr_range, start, end);
> -
> -       for (i = 0; i < nr_range; i++) {
> -               unsigned long range, extra;
> -
> -               range = mr[i].end - mr[i].start;
> -               puds += (range + PUD_SIZE - 1) >> PUD_SHIFT;
> -
> -               if (mr[i].page_size_mask & (1 << PG_LEVEL_1G)) {
> -                       extra = range - ((range >> PUD_SHIFT) << PUD_SHIFT);
> -                       pmds += (extra + PMD_SIZE - 1) >> PMD_SHIFT;
> -               } else {
> -                       pmds += (range + PMD_SIZE - 1) >> PMD_SHIFT;
> -               }
> -
> -               if (mr[i].page_size_mask & (1 << PG_LEVEL_2M)) {
> -                       extra = range - ((range >> PMD_SHIFT) << PMD_SHIFT);
> -#ifdef CONFIG_X86_32
> -                       extra += PMD_SIZE;
> -#endif
> -                       ptes += (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
> -               } else {
> -                       ptes += (range + PAGE_SIZE - 1) >> PAGE_SHIFT;
> -               }
> -       }
> -
> -       tables = roundup(puds * sizeof(pud_t), PAGE_SIZE);
> -       tables += roundup(pmds * sizeof(pmd_t), PAGE_SIZE);
> -       tables += roundup(ptes * sizeof(pte_t), PAGE_SIZE);
> -
> -#ifdef CONFIG_X86_32
> -       /* for fixmap */
> -       tables += roundup(__end_of_fixed_addresses * sizeof(pte_t), PAGE_SIZE);
> -#endif
> -
> -       return tables;
> -}
> -
> -static unsigned long __init calculate_all_table_space_size(void)
> -{
> -       unsigned long start_pfn, end_pfn;
> -       unsigned long tables;
> -       int i;
> -
> -       /* the ISA range is always mapped regardless of memory holes */
> -       tables = calculate_table_space_size(0, ISA_END_ADDRESS);
> -
> -       for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
> -               u64 start = start_pfn << PAGE_SHIFT;
> -               u64 end = end_pfn << PAGE_SHIFT;
> -
> -               if (end <= ISA_END_ADDRESS)
> -                       continue;
> -
> -               if (start < ISA_END_ADDRESS)
> -                       start = ISA_END_ADDRESS;
> -#ifdef CONFIG_X86_32
> -               /* on 32 bit, we only map up to max_low_pfn */
> -               if ((start >> PAGE_SHIFT) >= max_low_pfn)
> -                       continue;
> -
> -               if ((end >> PAGE_SHIFT) > max_low_pfn)
> -                       end = max_low_pfn << PAGE_SHIFT;
> -#endif
> -               tables += calculate_table_space_size(start, end);
> -       }
> -
> -       return tables;
> -}
> -
> -static void __init find_early_table_space(unsigned long start,
> -                                         unsigned long good_end,
> -                                         unsigned long tables)
> -{
> -       phys_addr_t base;
> -
> -       base = memblock_find_in_range(start, good_end, tables, PAGE_SIZE);
> -       if (!base)
> -               panic("Cannot find space for the kernel page tables");
> -
> -       pgt_buf_start = base >> PAGE_SHIFT;
> -       pgt_buf_end = pgt_buf_start;
> -       pgt_buf_top = pgt_buf_start + (tables >> PAGE_SHIFT);
> -}
> -
>  static struct range pfn_mapped[E820_X_MAX];
>  static int nr_pfn_mapped;
> 
> @@ -392,17 +308,14 @@ unsigned long __init_refok init_memory_mapping(unsigned long start,
>  }
> 
>  /*
> - * Iterate through E820 memory map and create direct mappings for only E820_RAM
> - * regions. We cannot simply create direct mappings for all pfns from
> - * [0 to max_low_pfn) and [4GB to max_pfn) because of possible memory holes in
> - * high addresses that cannot be marked as UC by fixed/variable range MTRRs.
> - * Depending on the alignment of E820 ranges, this may possibly result in using
> - * smaller size (i.e. 4K instead of 2M or 1G) page tables.
> + * this one could take range with hole in it.
>   */

This comment in particular is not very satisfactory: it replaces a
detailed explanation of why only E820_RAM regions are direct-mapped with
a vague one-liner.


> -static void __init init_range_memory_mapping(unsigned long range_start,
> +static unsigned long __init init_range_memory_mapping(
> +                                          unsigned long range_start,
>                                            unsigned long range_end)
>  {
>         unsigned long start_pfn, end_pfn;
> +       unsigned long mapped_ram_size = 0;
>         int i;
> 
>         for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
> @@ -422,71 +335,70 @@ static void __init init_range_memory_mapping(unsigned long range_start,
>                         end = range_end;
> 
>                 init_memory_mapping(start, end);
> +
> +               mapped_ram_size += end - start;
>         }
> +
> +       return mapped_ram_size;
>  }
> 
> +/* (PUD_SHIFT-PMD_SHIFT)/2 */
> +#define STEP_SIZE_SHIFT 5
>  void __init init_mem_mapping(void)
>  {
> -       unsigned long tables, good_end, end;
> +       unsigned long end, real_end, start, last_start;
> +       unsigned long step_size;
> +       unsigned long addr;
> +       unsigned long mapped_ram_size = 0;
> +       unsigned long new_mapped_ram_size;
> 
>         probe_page_size_mask();
> 
> -       /*
> -        * Find space for the kernel direct mapping tables.
> -        *
> -        * Later we should allocate these tables in the local node of the
> -        * memory mapped. Unfortunately this is done currently before the
> -        * nodes are discovered.
> -        */
>  #ifdef CONFIG_X86_64
>         end = max_pfn << PAGE_SHIFT;
> -       good_end = end;
>  #else
>         end = max_low_pfn << PAGE_SHIFT;
> -       good_end = max_pfn_mapped << PAGE_SHIFT;
>  #endif
> -       tables = calculate_all_table_space_size();
> -       find_early_table_space(0, good_end, tables);
> -       printk(KERN_DEBUG "kernel direct mapping tables up to %#lx @ [mem %#010lx-%#010lx] prealloc\n",
> -               end - 1, pgt_buf_start << PAGE_SHIFT,
> -               (pgt_buf_top << PAGE_SHIFT) - 1);
> 
> -       max_pfn_mapped = 0; /* will get exact value next */
>         /* the ISA range is always mapped regardless of memory holes */
>         init_memory_mapping(0, ISA_END_ADDRESS);
> -       init_range_memory_mapping(ISA_END_ADDRESS, end);
> +
> +       /* xen has big range in reserved near end of ram, skip it at first */
> +       addr = memblock_find_in_range(ISA_END_ADDRESS, end, PMD_SIZE,
> +                        PAGE_SIZE);
> +       real_end = addr + PMD_SIZE;
> +
> +       /* step_size need to be small so pgt_buf from BRK could cover it */
> +       step_size = PMD_SIZE;
> +       max_pfn_mapped = 0; /* will get exact value next */
> +       min_pfn_mapped = real_end >> PAGE_SHIFT;
> +       last_start = start = real_end;
> +       while (last_start > ISA_END_ADDRESS) {
> +               if (last_start > step_size) {
> +                       start = round_down(last_start - 1, step_size);
> +                       if (start < ISA_END_ADDRESS)
> +                               start = ISA_END_ADDRESS;
> +               } else
> +                       start = ISA_END_ADDRESS;
> +               new_mapped_ram_size = init_range_memory_mapping(start,
> +                                                       last_start);
> +               last_start = start;
> +               min_pfn_mapped = last_start >> PAGE_SHIFT;
> +               /* only increase step_size after big range get mapped */
> +               if (new_mapped_ram_size > mapped_ram_size)
> +                       step_size <<= STEP_SIZE_SHIFT;
> +               mapped_ram_size += new_mapped_ram_size;
> +       }
> +
> +       if (real_end < end)
> +               init_range_memory_mapping(real_end, end);
> +
>  #ifdef CONFIG_X86_64
>         if (max_pfn > max_low_pfn) {
>                 /* can we preseve max_low_pfn ?*/
>                 max_low_pfn = max_pfn;
>         }
>  #endif
> -       /*
> -        * Reserve the kernel pagetable pages we used (pgt_buf_start -
> -        * pgt_buf_end) and free the other ones (pgt_buf_end - pgt_buf_top)
> -        * so that they can be reused for other purposes.
> -        *
> -        * On native it just means calling memblock_reserve, on Xen it also
> -        * means marking RW the pagetable pages that we allocated before
> -        * but that haven't been used.
> -        *
> -        * In fact on xen we mark RO the whole range pgt_buf_start -
> -        * pgt_buf_top, because we have to make sure that when
> -        * init_memory_mapping reaches the pagetable pages area, it maps
> -        * RO all the pagetable pages, including the ones that are beyond
> -        * pgt_buf_end at that time.
> -        */
> -       if (pgt_buf_end > pgt_buf_start) {
> -               printk(KERN_DEBUG "kernel direct mapping tables up to %#lx @ [mem %#010lx-%#010lx] final\n",
> -                       end - 1, pgt_buf_start << PAGE_SHIFT,
> -                       (pgt_buf_end << PAGE_SHIFT) - 1);
> -               x86_init.mapping.pagetable_reserve(PFN_PHYS(pgt_buf_start),
> -                               PFN_PHYS(pgt_buf_end));
> -       }
> -
> -       /* stop the wrong using */
> -       pgt_buf_top = 0;
> -
>         early_memtest(0, max_pfn_mapped << PAGE_SHIFT);
>  }

You should say why we don't need to call pagetable_reserve anymore: is
it because alloc_low_page is going to reserve each page that it
allocates?


> diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
> index 27f7fc6..7bb1106 100644
> --- a/arch/x86/mm/init_32.c
> +++ b/arch/x86/mm/init_32.c
> @@ -61,11 +61,22 @@ bool __read_mostly __vmalloc_start_set = false;
> 
>  static __init void *alloc_low_page(void)
>  {
> -       unsigned long pfn = pgt_buf_end++;
> +       unsigned long pfn;
>         void *adr;
> 
> -       if (pfn >= pgt_buf_top)
> -               panic("alloc_low_page: ran out of memory");
> +       if ((pgt_buf_end + 1) >= pgt_buf_top) {
> +               unsigned long ret;
> +               if (min_pfn_mapped >= max_pfn_mapped)
> +                       panic("alloc_low_page: ran out of memory");
> +               ret = memblock_find_in_range(min_pfn_mapped << PAGE_SHIFT,
> +                                       max_pfn_mapped << PAGE_SHIFT,
> +                                       PAGE_SIZE, PAGE_SIZE);
> +               if (!ret)
> +                       panic("alloc_low_page: can not alloc memory");
> +               memblock_reserve(ret, PAGE_SIZE);
> +               pfn = ret >> PAGE_SHIFT;
> +       } else
> +               pfn = pgt_buf_end++;
> 
>         adr = __va(pfn * PAGE_SIZE);
>         clear_page(adr);
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index fa28e3e..eefaea6 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -316,7 +316,7 @@ void __init cleanup_highmap(void)
> 
>  static __ref void *alloc_low_page(unsigned long *phys)
>  {
> -       unsigned long pfn = pgt_buf_end++;
> +       unsigned long pfn;
>         void *adr;
> 
>         if (after_bootmem) {
> @@ -326,8 +326,19 @@ static __ref void *alloc_low_page(unsigned long *phys)
>                 return adr;
>         }
> 
> -       if (pfn >= pgt_buf_top)
> -               panic("alloc_low_page: ran out of memory");
> +       if ((pgt_buf_end + 1) >= pgt_buf_top) {
> +               unsigned long ret;
> +               if (min_pfn_mapped >= max_pfn_mapped)
> +                       panic("alloc_low_page: ran out of memory");
> +               ret = memblock_find_in_range(min_pfn_mapped << PAGE_SHIFT,
> +                                       max_pfn_mapped << PAGE_SHIFT,
> +                                       PAGE_SIZE, PAGE_SIZE);
> +               if (!ret)
> +                       panic("alloc_low_page: can not alloc memory");
> +               memblock_reserve(ret, PAGE_SIZE);
> +               pfn = ret >> PAGE_SHIFT;
> +       } else
> +               pfn = pgt_buf_end++;
> 
>         adr = early_memremap(pfn * PAGE_SIZE, PAGE_SIZE);
>         clear_page(adr);
> --
> 1.7.7
> 
