linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 1/3] x86/mm/KASLR: Fix the wrong calculation of kalsr region initial size
@ 2018-09-09 12:49 Baoquan He
  2018-09-09 12:49 ` [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region Baoquan He
                   ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Baoquan He @ 2018-09-09 12:49 UTC (permalink / raw)
  To: tglx, mingo, hpa, thgarnie, kirill.shutemov; +Cc: x86, linux-kernel, Baoquan He

In memory KASLR, __PHYSICAL_MASK_SHIFT is taken to calculate the
initial size of the direct mapping region. This is right in the
old code where __PHYSICAL_MASK_SHIFT was equal to MAX_PHYSMEM_BITS,
46bit, and only 4-level mode was supported.

Later, in commit:
b83ce5ee91471d ("x86/mm/64: Make __PHYSICAL_MASK_SHIFT always 52"),
__PHYSICAL_MASK_SHIFT was changed to be 52 always, no matter it's
5-level or 4-level. This is wrong for 4-level paging. Then when
adapt phyiscal memory region size based on available memory, it
will overflow if the amount of system RAM and the padding is bigger
than 64TB.

In fact, here MAX_PHYSMEM_BITS should be used instead. Fix it by
replacing __PHYSICAL_MASK_SHIFT with MAX_PHYSMEM_BITS.

Fixes: b83ce5ee9147 ("x86/mm/64: Make __PHYSICAL_MASK_SHIFT always 52")
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
---
 arch/x86/mm/kaslr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 61db77b0eda9..0988971069c9 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -93,7 +93,7 @@ void __init kernel_randomize_memory(void)
 	if (!kaslr_memory_enabled())
 		return;
 
-	kaslr_regions[0].size_tb = 1 << (__PHYSICAL_MASK_SHIFT - TB_SHIFT);
+	kaslr_regions[0].size_tb = 1 << (MAX_PHYSMEM_BITS - TB_SHIFT);
 	kaslr_regions[1].size_tb = VMALLOC_SIZE_TB;
 
 	/*
-- 
2.13.6


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region
  2018-09-09 12:49 [PATCH v2 1/3] x86/mm/KASLR: Fix the wrong calculation of kalsr region initial size Baoquan He
@ 2018-09-09 12:49 ` Baoquan He
  2018-09-10  6:11   ` Ingo Molnar
  2018-09-09 12:49 ` [PATCH v2 3/3] mm: Add build time sanity chcek for struct page size Baoquan He
  2018-09-10  6:18 ` [PATCH v2 1/3] x86/mm/KASLR: Fix the wrong calculation of kalsr region initial size Ingo Molnar
  2 siblings, 1 reply; 18+ messages in thread
From: Baoquan He @ 2018-09-09 12:49 UTC (permalink / raw)
  To: tglx, mingo, hpa, thgarnie, kirill.shutemov; +Cc: x86, linux-kernel, Baoquan He

Vmemmap region has different maximum size depending on paging mode.
Now its size is hardcoded as 1TB in memory KASLR, this is not
right for 5-level paging mode. It will cause overflow if vmemmap
region is randomized to be adjacent to cpu_entry_area region and
its actual size is bigger than 1TB.

So here calculate how many TB by the actual size of vmemmap region
and align up to 1TB boundary.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 arch/x86/mm/kaslr.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 0988971069c9..1db8e166455e 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -51,7 +51,7 @@ static __initdata struct kaslr_memory_region {
 } kaslr_regions[] = {
 	{ &page_offset_base, 0 },
 	{ &vmalloc_base, 0 },
-	{ &vmemmap_base, 1 },
+	{ &vmemmap_base, 0 },
 };
 
 /* Get size in bytes used by the memory region */
@@ -77,6 +77,7 @@ void __init kernel_randomize_memory(void)
 	unsigned long rand, memory_tb;
 	struct rnd_state rand_state;
 	unsigned long remain_entropy;
+	unsigned long vmemmap_size;
 
 	vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
 	vaddr = vaddr_start;
@@ -108,6 +109,14 @@ void __init kernel_randomize_memory(void)
 	if (memory_tb < kaslr_regions[0].size_tb)
 		kaslr_regions[0].size_tb = memory_tb;
 
+	/*
+	 * Calculate how many TB vmemmap region needs, and align to
+	 * 1TB boundary.
+	 * */
+	vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
+		sizeof(struct page);
+	kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);
+
 	/* Calculate entropy available between regions */
 	remain_entropy = vaddr_end - vaddr_start;
 	for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
-- 
2.13.6


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v2 3/3] mm: Add build time sanity chcek for struct page size
  2018-09-09 12:49 [PATCH v2 1/3] x86/mm/KASLR: Fix the wrong calculation of kalsr region initial size Baoquan He
  2018-09-09 12:49 ` [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region Baoquan He
@ 2018-09-09 12:49 ` Baoquan He
  2018-09-10 13:41   ` kbuild test robot
  2018-09-10  6:18 ` [PATCH v2 1/3] x86/mm/KASLR: Fix the wrong calculation of kalsr region initial size Ingo Molnar
  2 siblings, 1 reply; 18+ messages in thread
From: Baoquan He @ 2018-09-09 12:49 UTC (permalink / raw)
  To: tglx, mingo, hpa, thgarnie, kirill.shutemov; +Cc: x86, linux-kernel, Baoquan He

The size of struct page might be larger than 64 bytes if debug options
are enabled, or if fields are added intentionally for debugging. Yet an
upper limit needs to be enforced at build time to trigger an alert in
case the size grows too big to boot the system, warning people in
advance to check whether it was done on purpose.

Here 1/4 of PAGE_SIZE is chosen as the limit, since a system with a
struct page that large must already be insane. For systems with
PAGE_SIZE larger than 4 KB, 1 KB is simply taken.

Suggested-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Baoquan He <bhe@redhat.com>
---
 mm/page_alloc.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e62e26d41796..d3d1284bba77 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -67,6 +67,7 @@
 #include <linux/ftrace.h>
 #include <linux/lockdep.h>
 #include <linux/nmi.h>
+#include <linux/sizes.h>
 
 #include <asm/sections.h>
 #include <asm/tlbflush.h>
@@ -6849,6 +6850,7 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn)
 	unsigned long start_pfn, end_pfn;
 	int i, nid;
 
+	BUILD_BUG_ON(sizeof(struct page) < min(SZ_1K, PAGE_SIZE/4));
 	/* Record where the zone boundaries are */
 	memset(arch_zone_lowest_possible_pfn, 0,
 				sizeof(arch_zone_lowest_possible_pfn));
-- 
2.13.6


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region
  2018-09-09 12:49 ` [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region Baoquan He
@ 2018-09-10  6:11   ` Ingo Molnar
  2018-09-11  7:30     ` Baoquan He
  0 siblings, 1 reply; 18+ messages in thread
From: Ingo Molnar @ 2018-09-10  6:11 UTC (permalink / raw)
  To: Baoquan He
  Cc: tglx, hpa, thgarnie, kirill.shutemov, x86, linux-kernel,
	Peter Zijlstra, Kees Cook


* Baoquan He <bhe@redhat.com> wrote:

> Vmemmap region has different maximum size depending on paging mode.
> Now its size is hardcoded as 1TB in memory KASLR, this is not
> right for 5-level paging mode. It will cause overflow if vmemmap
> region is randomized to be adjacent to cpu_entry_area region and
> its actual size is bigger than 1TB.
> 
> So here calculate how many TB by the actual size of vmemmap region
> and align up to 1TB boundary.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>
> ---
>  arch/x86/mm/kaslr.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
> index 0988971069c9..1db8e166455e 100644
> --- a/arch/x86/mm/kaslr.c
> +++ b/arch/x86/mm/kaslr.c
> @@ -51,7 +51,7 @@ static __initdata struct kaslr_memory_region {
>  } kaslr_regions[] = {
>  	{ &page_offset_base, 0 },
>  	{ &vmalloc_base, 0 },
> -	{ &vmemmap_base, 1 },
> +	{ &vmemmap_base, 0 },
>  };
>  
>  /* Get size in bytes used by the memory region */
> @@ -77,6 +77,7 @@ void __init kernel_randomize_memory(void)
>  	unsigned long rand, memory_tb;
>  	struct rnd_state rand_state;
>  	unsigned long remain_entropy;
> +	unsigned long vmemmap_size;
>  
>  	vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
>  	vaddr = vaddr_start;
> @@ -108,6 +109,14 @@ void __init kernel_randomize_memory(void)
>  	if (memory_tb < kaslr_regions[0].size_tb)
>  		kaslr_regions[0].size_tb = memory_tb;
>  
> +	/*
> +	 * Calculate how many TB vmemmap region needs, and align to
> +	 * 1TB boundary.
> +	 * */

Yeah, so that's not the standard comment style ...

> +	vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
> +		sizeof(struct page);
> +	kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);

So I tried to review what all this code does, and the comments aren't too great to explain the 
concepts.

For example:

/*
 * Memory regions randomized by KASLR (except modules that use a separate logic
 * earlier during boot). The list is ordered based on virtual addresses. This
 * order is kept after randomization.
 */
static __initdata struct kaslr_memory_region {
        unsigned long *base;
        unsigned long size_tb;
} kaslr_regions[] = {
        { &page_offset_base, 0 },
        { &vmalloc_base, 0 },
        { &vmemmap_base, 1 },
};

So I get the part where the 'base' pointer is essentially pointers to various global variables 
used by the MM to get the virtual base address of the kernel, vmalloc and vmemmap areas from, 
which base addresses can thus be modified by the very early KASLR code to dynamically shape the 
virtual memory layout of these kernel memory areas on a per bootup basis.

(BTW., that would be a great piece of information to add for the uninitiated. It's not like 
it's obvious!)

But what does 'size_tb' do? Nothing explains it and your patch doesn't make it clearer either. 
Also, get_padding() looks like an unnecessary layer of obfuscation:

/* Get size in bytes used by the memory region */
static inline unsigned long get_padding(struct kaslr_memory_region *region)
{
        return (region->size_tb << TB_SHIFT);
}

It's used only twice and we do bit shifts in the parent function anyway so it's not like it's 
hiding some uninteresting detail. (The style ugliness of the return statement makes it annoying 
as well.)

So could we please first clean up this code, explain it properly, name the fields properly, 
etc., before modifying it? Because it still looks unnecessarily hard to review. I.e. this early 
boot code needs improvements of quality and neither the base code nor your patches give me the 
impression of carefully created, easy to maintain code.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 1/3] x86/mm/KASLR: Fix the wrong calculation of kalsr region initial size
  2018-09-09 12:49 [PATCH v2 1/3] x86/mm/KASLR: Fix the wrong calculation of kalsr region initial size Baoquan He
  2018-09-09 12:49 ` [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region Baoquan He
  2018-09-09 12:49 ` [PATCH v2 3/3] mm: Add build time sanity chcek for struct page size Baoquan He
@ 2018-09-10  6:18 ` Ingo Molnar
  2018-09-11  7:22   ` Baoquan He
  2 siblings, 1 reply; 18+ messages in thread
From: Ingo Molnar @ 2018-09-10  6:18 UTC (permalink / raw)
  To: Baoquan He
  Cc: tglx, hpa, thgarnie, kirill.shutemov, x86, linux-kernel, Peter Zijlstra


* Baoquan He <bhe@redhat.com> wrote:

> In memory KASLR, __PHYSICAL_MASK_SHIFT is taken to calculate the
> initial size of the direct mapping region. This is right in the
> old code where __PHYSICAL_MASK_SHIFT was equal to MAX_PHYSMEM_BITS,
> 46bit, and only 4-level mode was supported.
> 
> Later, in commit:
> b83ce5ee91471d ("x86/mm/64: Make __PHYSICAL_MASK_SHIFT always 52"),
> __PHYSICAL_MASK_SHIFT was changed to be 52 always, no matter it's
> 5-level or 4-level. This is wrong for 4-level paging. Then when
> adapt phyiscal memory region size based on available memory, it
> will overflow if the amount of system RAM and the padding is bigger
> than 64TB.
> 
> In fact, here MAX_PHYSMEM_BITS should be used instead. Fix it by
> replacing __PHYSICAL_MASK_SHIFT with MAX_PHYSMEM_BITS.
> 
> Fixes: b83ce5ee9147 ("x86/mm/64: Make __PHYSICAL_MASK_SHIFT always 52")
> Signed-off-by: Baoquan He <bhe@redhat.com>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Reviewed-by: Thomas Garnier <thgarnie@google.com>

So this changelog has a handful of problems:

 - there's a typo in the title

 - what does 'memory KASLR' mean? All KASLR deals with memory.

 - there's a typo in the second paragraph

 - Please punctuate more precisely: '64TB' is written as '64 TB' and '46bit' is written as 
   '46 bits'

 - '52 always' is accurate but '52 bits always' would be more useful: write out units where
   appropriate to reduce  ambiguity and parsing complexity of changelogs. Also, in this
   particular sentence it should be 'always 52 bits'.

 - s/when adapt
    /when we adapt

 - s/This is right in the old code
    /This is correct in the old code

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 3/3] mm: Add build time sanity chcek for struct page size
  2018-09-09 12:49 ` [PATCH v2 3/3] mm: Add build time sanity chcek for struct page size Baoquan He
@ 2018-09-10 13:41   ` kbuild test robot
  2018-09-11  7:47     ` Baoquan He
  0 siblings, 1 reply; 18+ messages in thread
From: kbuild test robot @ 2018-09-10 13:41 UTC (permalink / raw)
  To: Baoquan He
  Cc: kbuild-all, tglx, mingo, hpa, thgarnie, kirill.shutemov, x86,
	linux-kernel, Baoquan He

[-- Attachment #1: Type: text/plain, Size: 5613 bytes --]

Hi Baoquan,

I love your patch! Yet something to improve:

[auto build test ERROR on tip/auto-latest]
[also build test ERROR on v4.19-rc2 next-20180906]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Baoquan-He/x86-mm-KASLR-Fix-the-wrong-calculation-of-kalsr-region-initial-size/20180910-205421
config: i386-randconfig-x077-201836 (attached as .config)
compiler: gcc-7 (Debian 7.3.0-1) 7.3.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All error/warnings (new ones prefixed by >>):

   In file included from include/asm-generic/bug.h:5:0,
                    from arch/x86/include/asm/bug.h:83,
                    from include/linux/bug.h:5,
                    from include/linux/mmdebug.h:5,
                    from include/linux/mm.h:9,
                    from mm/page_alloc.c:18:
   mm/page_alloc.c: In function 'free_area_init_nodes':
   include/linux/kernel.h:845:29: warning: comparison of distinct pointer types lacks a cast
      (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
                                ^
   include/linux/compiler.h:335:18: note: in definition of macro '__compiletime_assert'
      int __cond = !(condition);    \
                     ^~~~~~~~~
   include/linux/compiler.h:358:2: note: in expansion of macro '_compiletime_assert'
     _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
     ^~~~~~~~~~~~~~~~~~~
   include/linux/build_bug.h:45:37: note: in expansion of macro 'compiletime_assert'
    #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
                                        ^~~~~~~~~~~~~~~~~~
   include/linux/build_bug.h:69:2: note: in expansion of macro 'BUILD_BUG_ON_MSG'
     BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
     ^~~~~~~~~~~~~~~~
>> mm/page_alloc.c:6852:2: note: in expansion of macro 'BUILD_BUG_ON'
     BUILD_BUG_ON(sizeof(struct page) < min(SZ_1K, PAGE_SIZE/4));
     ^~~~~~~~~~~~
   include/linux/kernel.h:859:4: note: in expansion of macro '__typecheck'
      (__typecheck(x, y) && __no_side_effects(x, y))
       ^~~~~~~~~~~
   include/linux/kernel.h:869:24: note: in expansion of macro '__safe_cmp'
     __builtin_choose_expr(__safe_cmp(x, y), \
                           ^~~~~~~~~~
   include/linux/kernel.h:878:19: note: in expansion of macro '__careful_cmp'
    #define min(x, y) __careful_cmp(x, y, <)
                      ^~~~~~~~~~~~~
>> mm/page_alloc.c:6852:37: note: in expansion of macro 'min'
     BUILD_BUG_ON(sizeof(struct page) < min(SZ_1K, PAGE_SIZE/4));
                                        ^~~
>> include/linux/compiler.h:358:38: error: call to '__compiletime_assert_6852' declared with attribute error: BUILD_BUG_ON failed: sizeof(struct page) < min(SZ_1K, PAGE_SIZE/4)
     _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
                                         ^
   include/linux/compiler.h:338:4: note: in definition of macro '__compiletime_assert'
       prefix ## suffix();    \
       ^~~~~~
   include/linux/compiler.h:358:2: note: in expansion of macro '_compiletime_assert'
     _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
     ^~~~~~~~~~~~~~~~~~~
   include/linux/build_bug.h:45:37: note: in expansion of macro 'compiletime_assert'
    #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
                                        ^~~~~~~~~~~~~~~~~~
   include/linux/build_bug.h:69:2: note: in expansion of macro 'BUILD_BUG_ON_MSG'
     BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
     ^~~~~~~~~~~~~~~~
>> mm/page_alloc.c:6852:2: note: in expansion of macro 'BUILD_BUG_ON'
     BUILD_BUG_ON(sizeof(struct page) < min(SZ_1K, PAGE_SIZE/4));
     ^~~~~~~~~~~~

vim +/__compiletime_assert_6852 +358 include/linux/compiler.h

9a8ab1c3 Daniel Santos 2013-02-21  344  
9a8ab1c3 Daniel Santos 2013-02-21  345  #define _compiletime_assert(condition, msg, prefix, suffix) \
9a8ab1c3 Daniel Santos 2013-02-21  346  	__compiletime_assert(condition, msg, prefix, suffix)
9a8ab1c3 Daniel Santos 2013-02-21  347  
9a8ab1c3 Daniel Santos 2013-02-21  348  /**
9a8ab1c3 Daniel Santos 2013-02-21  349   * compiletime_assert - break build and emit msg if condition is false
9a8ab1c3 Daniel Santos 2013-02-21  350   * @condition: a compile-time constant condition to check
9a8ab1c3 Daniel Santos 2013-02-21  351   * @msg:       a message to emit if condition is false
9a8ab1c3 Daniel Santos 2013-02-21  352   *
9a8ab1c3 Daniel Santos 2013-02-21  353   * In tradition of POSIX assert, this macro will break the build if the
9a8ab1c3 Daniel Santos 2013-02-21  354   * supplied condition is *false*, emitting the supplied error message if the
9a8ab1c3 Daniel Santos 2013-02-21  355   * compiler has support to do so.
9a8ab1c3 Daniel Santos 2013-02-21  356   */
9a8ab1c3 Daniel Santos 2013-02-21  357  #define compiletime_assert(condition, msg) \
9a8ab1c3 Daniel Santos 2013-02-21 @358  	_compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
9a8ab1c3 Daniel Santos 2013-02-21  359  

:::::: The code at line 358 was first introduced by commit
:::::: 9a8ab1c39970a4938a72d94e6fd13be88a797590 bug.h, compiler.h: introduce compiletime_assert & BUILD_BUG_ON_MSG

:::::: TO: Daniel Santos <daniel.santos@pobox.com>
:::::: CC: Linus Torvalds <torvalds@linux-foundation.org>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 29299 bytes --]

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 1/3] x86/mm/KASLR: Fix the wrong calculation of kalsr region initial size
  2018-09-10  6:18 ` [PATCH v2 1/3] x86/mm/KASLR: Fix the wrong calculation of kalsr region initial size Ingo Molnar
@ 2018-09-11  7:22   ` Baoquan He
  0 siblings, 0 replies; 18+ messages in thread
From: Baoquan He @ 2018-09-11  7:22 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: tglx, hpa, thgarnie, kirill.shutemov, x86, linux-kernel, Peter Zijlstra

On 09/10/18 at 08:18am, Ingo Molnar wrote:
> 
> * Baoquan He <bhe@redhat.com> wrote:
> 
> > In memory KASLR, __PHYSICAL_MASK_SHIFT is taken to calculate the
> > initial size of the direct mapping region. This is right in the
> > old code where __PHYSICAL_MASK_SHIFT was equal to MAX_PHYSMEM_BITS,
> > 46bit, and only 4-level mode was supported.
>......
>  - what does 'memory KASLR' mean? All KASLR deals with memory.

Thanks for your review. I have updated the patch log according to your
comments. As for this one, Thomas Garnier calls it memory section KASLR,
to differentiate it from kernel text KASLR. I would like to call it
memory region KASLR, is that OK?

Paste the updated patch here.


From df0df638a24aafeee6862f184769c4fb96f29afb Mon Sep 17 00:00:00 2001
From: Baoquan He <bhe@redhat.com>
Date: Fri, 29 Jun 2018 09:43:28 +0800
Subject: [PATCH] x86/mm/KASLR: Fix the wrong calculation of memory region
 initial size

In memory region KASLR, __PHYSICAL_MASK_SHIFT is taken to calculate
the initial size of the direct mapping region. This is correct in
the old code where __PHYSICAL_MASK_SHIFT was equal to MAX_PHYSMEM_BITS,
46 bits, and only 4-level mode was supported.

Later, in commit:
b83ce5ee91471d ("x86/mm/64: Make __PHYSICAL_MASK_SHIFT always 52"),
__PHYSICAL_MASK_SHIFT was changed to be always 52 bits, no matter it's
5-level or 4-level. This is wrong for 4-level paging. Then when we
adapt physical memory region size based on available memory, it
will overflow if the amount of system RAM and the padding is bigger
than 64 TB.

In fact, here MAX_PHYSMEM_BITS should be used instead. Fix it by
replacing __PHYSICAL_MASK_SHIFT with MAX_PHYSMEM_BITS.

Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: Baoquan He <bhe@redhat.com>
---
 arch/x86/mm/kaslr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 61db77b0eda9..0988971069c9 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -93,7 +93,7 @@ void __init kernel_randomize_memory(void)
 	if (!kaslr_memory_enabled())
 		return;
 
-	kaslr_regions[0].size_tb = 1 << (__PHYSICAL_MASK_SHIFT - TB_SHIFT);
+	kaslr_regions[0].size_tb = 1 << (MAX_PHYSMEM_BITS - TB_SHIFT);
 	kaslr_regions[1].size_tb = VMALLOC_SIZE_TB;
 
 	/*
-- 
2.13.6




^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region
  2018-09-10  6:11   ` Ingo Molnar
@ 2018-09-11  7:30     ` Baoquan He
  2018-09-11  7:59       ` Ingo Molnar
  0 siblings, 1 reply; 18+ messages in thread
From: Baoquan He @ 2018-09-11  7:30 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: tglx, hpa, thgarnie, kirill.shutemov, x86, linux-kernel,
	Peter Zijlstra, Kees Cook

On 09/10/18 at 08:11am, Ingo Molnar wrote:
> 
> * Baoquan He <bhe@redhat.com> wrote:
> 
> > @@ -108,6 +109,14 @@ void __init kernel_randomize_memory(void)
> >  	if (memory_tb < kaslr_regions[0].size_tb)
> >  		kaslr_regions[0].size_tb = memory_tb;
> >  
> > +	/*
> > +	 * Calculate how many TB vmemmap region needs, and align to
> > +	 * 1TB boundary.
> > +	 * */
> 
> Yeah, so that's not the standard comment style ...

Sorry for this, will change. Thanks.

About the cleanup you suggested, I have made two patches and pasted them
below. Please check whether they are OK. Thanks a lot.

> So I get the part where the 'base' pointer is essentially pointers to various global variables 
> used by the MM to get the virtual base address of the kernel, vmalloc and vmemmap areas from, 
> which base addresses can thus be modified by the very early KASLR code to dynamically shape the 
> virtual memory layout of these kernel memory areas on a per bootup basis.
> 
> (BTW., that would be a great piece of information to add for the uninitiated. It's not like 
> it's obvious!)
> 
> But what does 'size_tb' do? Nothing explains it and your patch doesn't make it clearer either. 
> Also, get_padding() looks like an unnecessary layer of obfuscation:

Yes, I agree. I have made a patch according to your suggestion and
pasted it here:

From c74b1e49eaa0e00335adbbb7e2d5df6d60518d4f Mon Sep 17 00:00:00 2001
From: Baoquan He <bhe@redhat.com>
Date: Tue, 11 Sep 2018 14:34:26 +0800
Subject: [PATCH] x86/mm/KASLR: Add code comments to explain fields of struct
 kaslr_memory_region

Signed-off-by: Baoquan He <bhe@redhat.com>
Suggested-by: Ingo Molnar <mingo@kernel.org>

---
 arch/x86/mm/kaslr.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 91053cee7648..402984d8d729 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -41,9 +41,21 @@
 static const unsigned long vaddr_end = CPU_ENTRY_AREA_BASE;
 
 /*
- * Memory regions randomized by KASLR (except modules that use a separate logic
- * earlier during boot). The list is ordered based on virtual addresses. This
- * order is kept after randomization.
+ * Memory regions randomized by KASLR (except modules that use a separate
+ * logic earlier during boot). Currently they are the physical memory
+ * mapping, vmalloc and vmemmap regions, are ordered based on virtual
+ * addresses. The order is kept after randomization.
+ *
+ * @base: points to various global variables used by the MM to get the
+ * virtual base address of the above regions, which base addresses can
+ * thus be modified by the very early KASLR code to dynamically shape
+ * the virtual memory layout of these kernel memory regions on a per
+ * bootup basis.
+ *
+ * @size_tb: size in TB of each memory region. Thereinto, the size of
+ * the physical memory mapping region is variable, calculated according
+ * to the actual size of system RAM in order to save more space for
+ * randomization. The rest are fixed values related to paging mode.
  */
 static __initdata struct kaslr_memory_region {
 	unsigned long *base;

> /* Get size in bytes used by the memory region */
> static inline unsigned long get_padding(struct kaslr_memory_region *region)
> {
>         return (region->size_tb << TB_SHIFT);
> }

Yes, we can open-code get_padding() as in the following patch.

From e4cababd630af06085cb79a0bae9c00acd5272c0 Mon Sep 17 00:00:00 2001
From: Baoquan He <bhe@redhat.com>
Date: Tue, 11 Sep 2018 14:39:38 +0800
Subject: [PATCH] x86/mm/KASLR: Open code unnecessary function get_padding

It's used only twice and we do bit shifts in the parent function
anyway so it's not like it's hiding some uninteresting detail.

Signed-off-by: Baoquan He <bhe@redhat.com>
Suggested-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/mm/kaslr.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 9774b6e39f63..91053cee7648 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -54,12 +54,6 @@ static __initdata struct kaslr_memory_region {
 	{ &vmemmap_base, 0 },
 };
 
-/* Get size in bytes used by the memory region */
-static inline unsigned long get_padding(struct kaslr_memory_region *region)
-{
-	return (region->size_tb << TB_SHIFT);
-}
-
 /*
  * Apply no randomization if KASLR was disabled at boot or if KASAN
  * is enabled. KASAN shadow mappings rely on regions being PGD aligned.
@@ -120,7 +114,7 @@ void __init kernel_randomize_memory(void)
 	/* Calculate entropy available between regions */
 	remain_entropy = vaddr_end - vaddr_start;
 	for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
-		remain_entropy -= get_padding(&kaslr_regions[i]);
+		remain_entropy -= kaslr_regions[i].size_tb << TB_SHIFT;
 
 	prandom_seed_state(&rand_state, kaslr_get_random_long("Memory"));
 
@@ -144,7 +138,7 @@ void __init kernel_randomize_memory(void)
 		 * Jump the region and add a minimum padding based on
 		 * randomization alignment.
 		 */
-		vaddr += get_padding(&kaslr_regions[i]);
+		vaddr += kaslr_regions[i].size_tb << TB_SHIFT;
 		if (pgtable_l5_enabled())
 			vaddr = round_up(vaddr + 1, P4D_SIZE);
 		else
-- 
2.13.6


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 3/3] mm: Add build time sanity chcek for struct page size
  2018-09-10 13:41   ` kbuild test robot
@ 2018-09-11  7:47     ` Baoquan He
  0 siblings, 0 replies; 18+ messages in thread
From: Baoquan He @ 2018-09-11  7:47 UTC (permalink / raw)
  To: kbuild test robot
  Cc: kbuild-all, tglx, mingo, hpa, thgarnie, kirill.shutemov, x86,
	linux-kernel

On 09/10/18 at 09:41pm, kbuild test robot wrote:
>    include/linux/build_bug.h:69:2: note: in expansion of macro 'BUILD_BUG_ON_MSG'
>      BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
>      ^~~~~~~~~~~~~~~~
> >> mm/page_alloc.c:6852:2: note: in expansion of macro 'BUILD_BUG_ON'
>      BUILD_BUG_ON(sizeof(struct page) < min(SZ_1K, PAGE_SIZE/4));
>      ^~~~~~~~~~~~

Thanks, the code below can silence the compile-time warning. Will update
and repost.

+       BUILD_BUG_ON(sizeof(struct page) > min((size_t)SZ_1K, PAGE_SIZE));

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region
  2018-09-11  7:30     ` Baoquan He
@ 2018-09-11  7:59       ` Ingo Molnar
  2018-09-11  8:18         ` Baoquan He
  0 siblings, 1 reply; 18+ messages in thread
From: Ingo Molnar @ 2018-09-11  7:59 UTC (permalink / raw)
  To: Baoquan He
  Cc: tglx, hpa, thgarnie, kirill.shutemov, x86, linux-kernel,
	Peter Zijlstra, Kees Cook


* Baoquan He <bhe@redhat.com> wrote:

>  /*
> + * Memory regions randomized by KASLR (except modules that use a separate
> + * logic earlier during boot). Currently they are the physical memory
> + * mapping, vmalloc and vmemmap regions, are ordered based on virtual
> + * addresses. The order is kept after randomization.
> + *
> + * @base: points to various global variables used by the MM to get the
> + * virtual base address of the above regions, which base addresses can
> + * thus be modified by the very early KASLR code to dynamically shape
> + * the virtual memory layout of these kernel memory regions on a per
> + * bootup basis.
> + *
> + * @size_tb: size in TB of each memory region. Thereinto, the size of
> + * the physical memory mapping region is variable, calculated according
> + * to the actual size of system RAM in order to save more space for
> + * randomization. The rest are fixed values related to paging mode.
>   */
>  static __initdata struct kaslr_memory_region {
>  	unsigned long *base;

LGTM mostly, except the @size_tb field, see my comments further below.

Here's an edited version:

/*
 * 'struct kaslr_memory_region' entries represent continuous chunks of
 * kernel virtual memory regions, to be randomized by KASLR.
 *
 * ( The exception is the module space virtual memory window which
 *   uses separate logic earlier during bootup. )
 *
 * Currently there are three such regions: the physical memory mapping,
 * vmalloc and vmemmap regions.
 *
 * The array below has the entries ordered based on virtual addresses.
 * The order is kept after randomization, i.e. the randomized
 * virtual addresses of these regions are still ascending.
 *
 * Here are the fields:
 *
 * @base: points to a global variable used by the MM to get the
 * virtual base address of any of the above regions. This allows the
 * early KASLR code to modify these base addresses early during bootup,
 * on a per bootup basis, without the MM code even being aware of whether
 * it got changed and to what value.
 *
 * When KASLR is active then the MM code makes sure that for each region
 * there's such a single, dynamic, global base address 'unsigned long'
 * variable available for the KASLR code to point to and modify directly:
 *
 *       { &page_offset_base, 0 },
 *       { &vmalloc_base,     0 },
 *       { &vmemmap_base,     1 },
 *
 * @size_tb: size in TB of each memory region. Among them, the size of
 * the physical memory mapping region is variable, calculated according
 * to the actual size of system RAM in order to save more space for
 * randomization. The rest are fixed values related to paging mode.
 */

The role of @size_tb is still murky to me. What is it telling us?
Maximum virtual memory range to randomize into? Why does this depend
on system RAM at all - aren't these all virtual addresses in a 64-bit
(well, 48-bit or 56-bit) address ranges?

I could read the code to figure this out, but the comment should already
explain this and it doesn't.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region
  2018-09-11  7:59       ` Ingo Molnar
@ 2018-09-11  8:18         ` Baoquan He
  2018-09-11  9:28           ` Ingo Molnar
  0 siblings, 1 reply; 18+ messages in thread
From: Baoquan He @ 2018-09-11  8:18 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: tglx, hpa, thgarnie, kirill.shutemov, x86, linux-kernel,
	Peter Zijlstra, Kees Cook

On 09/11/18 at 09:59am, Ingo Molnar wrote:
> 
> * Baoquan He <bhe@redhat.com> wrote:
> 
> >  /*
> > + * Memory regions randomized by KASLR (except modules that use a separate
> > + * logic earlier during boot). Currently they are the physical memory
> > + * mapping, vmalloc and vmemmap regions, are ordered based on virtual
> > + * addresses. The order is kept after randomization.
> > + *
> > + * @base: points to various global variables used by the MM to get the
> > + * virtual base address of the above regions, which base addresses can
> > + * thus be modified by the very early KASLR code to dynamically shape
> > + * the virtual memory layout of these kernel memory regions on a per
> > + * bootup basis.
> > + *
> > + * @size_tb: size in TB of each memory region. Thereinto, the size of
> > + * the physical memory mapping region is variable, calculated according
> > + * to the actual size of system RAM in order to save more space for
> > + * randomization. The rest are fixed values related to paging mode.
> >   */
> >  static __initdata struct kaslr_memory_region {
> >  	unsigned long *base;
> 
> LGTM mostly, except the @size_tb field, see my comments further below.
> 
> Here's an edited version:
> 
> /*
>  * 'struct kaslr_memory_region' entries represent contiguous chunks of
>  * kernel virtual memory regions, to be randomized by KASLR.
>  *
>  * ( The exception is the module space virtual memory window which
>  *   uses separate logic earlier during bootup. )
>  *
>  * Currently there are three such regions: the physical memory mapping,
>  * vmalloc and vmemmap regions.
>  *
>  * The array below has the entries ordered based on virtual addresses.
>  * The order is kept after randomization, i.e. the randomized
>  * virtual addresses of these regions are still ascending.
>  *
>  * Here are the fields:
>  *
>  * @base: points to a global variable used by the MM to get the
>  * virtual base address of any of the above regions. This allows the
>  * early KASLR code to modify these base addresses early during bootup,
>  * on a per bootup basis, without the MM code even being aware of whether
>  * it got changed and to what value.
>  *
>  * When KASLR is active then the MM code makes sure that for each region
>  * there's such a single, dynamic, global base address 'unsigned long'
>  * variable available for the KASLR code to point to and modify directly:

>  *
>  *       { &page_offset_base, 0 },
>  *       { &vmalloc_base,     0 },
>  *       { &vmemmap_base,     1 },
>  *
>  * @size_tb: size in TB of each memory region. Among them, the size of
>  * the physical memory mapping region is variable, calculated according
>  * to the actual size of system RAM in order to save more space for
>  * randomization. The rest are fixed values related to paging mode.
>  */
> 
> The role of @size_tb is still murky to me. What is it telling us?
> Maximum virtual memory range to randomize into? Why does this depend
> on system RAM at all - aren't these all virtual addresses in a 64-bit
> (well, 48-bit or 56-bit) address ranges?

* @size_tb: size in TB of each memory region. The size of the physical
* memory mapping region is variable, calculated according to the actual
* size of system RAM. Most systems have far less RAM than the 64 TB
* reserved for mapping the maximum physical memory in 4-level paging
* mode (let alone 5-level), so the leftover space can be used to
* enhance randomness.
How about this? And please forgive my poor English.

Thanks
Baoquan


* Re: [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region
  2018-09-11  8:18         ` Baoquan He
@ 2018-09-11  9:28           ` Ingo Molnar
  2018-09-11 12:08             ` Baoquan He
  0 siblings, 1 reply; 18+ messages in thread
From: Ingo Molnar @ 2018-09-11  9:28 UTC (permalink / raw)
  To: Baoquan He
  Cc: tglx, hpa, thgarnie, kirill.shutemov, x86, linux-kernel,
	Peter Zijlstra, Kees Cook


* Baoquan He <bhe@redhat.com> wrote:

> On 09/11/18 at 09:59am, Ingo Molnar wrote:
> > 
> > * Baoquan He <bhe@redhat.com> wrote:
> > 
> > >  /*
> > > + * Memory regions randomized by KASLR (except modules that use a separate
> > > + * logic earlier during boot). Currently they are the physical memory
> > > + * mapping, vmalloc and vmemmap regions, are ordered based on virtual
> > > + * addresses. The order is kept after randomization.
> > > + *
> > > + * @base: points to various global variables used by the MM to get the
> > > + * virtual base address of the above regions, which base addresses can
> > > + * thus be modified by the very early KASLR code to dynamically shape
> > > + * the virtual memory layout of these kernel memory regions on a per
> > > + * bootup basis.
> > > + *
> > > + * @size_tb: size in TB of each memory region. Thereinto, the size of
> > > + * the physical memory mapping region is variable, calculated according
> > > + * to the actual size of system RAM in order to save more space for
> > > + * randomization. The rest are fixed values related to paging mode.
> > >   */
> > >  static __initdata struct kaslr_memory_region {
> > >  	unsigned long *base;
> > 
> > LGTM mostly, except the @size_tb field, see my comments further below.
> > 
> > Here's an edited version:
> > 
> > /*
> >  * 'struct kaslr_memory_region' entries represent contiguous chunks of
> >  * kernel virtual memory regions, to be randomized by KASLR.
> >  *
> >  * ( The exception is the module space virtual memory window which
> >  *   uses separate logic earlier during bootup. )
> >  *
> >  * Currently there are three such regions: the physical memory mapping,
> >  * vmalloc and vmemmap regions.
> >  *
> >  * The array below has the entries ordered based on virtual addresses.
> >  * The order is kept after randomization, i.e. the randomized
> >  * virtual addresses of these regions are still ascending.
> >  *
> >  * Here are the fields:
> >  *
> >  * @base: points to a global variable used by the MM to get the
> >  * virtual base address of any of the above regions. This allows the
> >  * early KASLR code to modify these base addresses early during bootup,
> >  * on a per bootup basis, without the MM code even being aware of whether
> >  * it got changed and to what value.
> >  *
> >  * When KASLR is active then the MM code makes sure that for each region
> >  * there's such a single, dynamic, global base address 'unsigned long'
> >  * variable available for the KASLR code to point to and modify directly:
> 
> >  *
> >  *       { &page_offset_base, 0 },
> >  *       { &vmalloc_base,     0 },
> >  *       { &vmemmap_base,     1 },
> >  *
> >  * @size_tb: size in TB of each memory region. Among them, the size of
> >  * the physical memory mapping region is variable, calculated according
> >  * to the actual size of system RAM in order to save more space for
> >  * randomization. The rest are fixed values related to paging mode.
> >  */
> > 
> > The role of @size_tb is still murky to me. What is it telling us?
> > Maximum virtual memory range to randomize into? Why does this depend
> > on system RAM at all - aren't these all virtual addresses in a 64-bit
> > (well, 48-bit or 56-bit) address ranges?
> 
> * @size_tb: size in TB of each memory region. The size of the physical
> * memory mapping region is variable, calculated according to the actual
> * size of system RAM. Most systems have far less RAM than the 64 TB
> * reserved for mapping the maximum physical memory in 4-level paging
> * mode (let alone 5-level), so the leftover space can be used to
> * enhance randomness.
> How about this?

Yeah, so proper context is still missing, this paragraph appears to assume from the reader a 
whole lot of prior knowledge, and this is one of the top comments in kaslr.c so there's nowhere 
else to go read about the background.

For example what is the range of randomization of each region? Assuming the static, 
non-randomized description in Documentation/x86/x86_64/mm.txt is correct, in what way does 
KASLR modify that layout?

All of this is very opaque and not explained very well anywhere that I could find. We need to 
generate a proper description ASAP.

> And please forgive my poor English.

No problem, I can prettify it afterwards, but the information is not there yet to prettify. :)

Thanks,

	Ingo


* Re: [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region
  2018-09-11  9:28           ` Ingo Molnar
@ 2018-09-11 12:08             ` Baoquan He
  2018-09-12  3:18               ` Baoquan He
  0 siblings, 1 reply; 18+ messages in thread
From: Baoquan He @ 2018-09-11 12:08 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: tglx, hpa, thgarnie, kirill.shutemov, x86, linux-kernel,
	Peter Zijlstra, Kees Cook

On 09/11/18 at 11:28am, Ingo Molnar wrote:
> Yeah, so proper context is still missing, this paragraph appears to assume from the reader a 
> whole lot of prior knowledge, and this is one of the top comments in kaslr.c so there's nowhere 
> else to go read about the background.
> 
> For example what is the range of randomization of each region? Assuming the static, 
> non-randomized description in Documentation/x86/x86_64/mm.txt is correct, in what way does 
> KASLR modify that layout?
> 
> All of this is very opaque and not explained very well anywhere that I could find. We need to 
> generate a proper description ASAP.

OK, let me try to give some context with my understanding, and copy the
static layout of the memory regions below for reference.

Here, Documentation/x86/x86_64/mm.txt is correct, and it's the
guideline for us to manipulate the layout of kernel memory regions.
Originally the starting address of each region was aligned to 512 GB,
so that each region starts at a PGD entry boundary in 4-level paging.
Since we are rich enough to have 120 TB of virtual address space, they
are actually aligned at 1 TB. So the randomness mainly comes from
three parts:

1) The direct mapping region for physical memory. 64 TB are reserved
to cover the maximum supported physical memory. However, most systems
have much less than 64 TB of RAM, often much less than 1 TB. The
surplus space can join the randomization; this is often the biggest
part.

2) The holes between memory regions, even though they are only 1 TB
each.

3) The KASAN region takes up 16 TB, but it is not used when KASLR is
enabled. This is another big part.

With this surplus address space, and with the starting address of each
memory region changed to be PUD level (1 GB) aligned, we can have
thousands of candidate positions in which to locate those three memory
regions.

The above is for 4-level paging mode. As for 5-level, since the
virtual address space is so big, Kirill makes the starting address of
each region P4D aligned, namely 512 GB.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ffff880000000000 - ffffc7ffffffffff (=64 TB) direct mapping of all phys. memory                                                                   
136T - 200T = 64TB
ffffc80000000000 - ffffc8ffffffffff (=40 bits) hole
200T - 201T = 1TB
ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
201T - 233T = 32TB
ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
233T - 234T = 1TB
ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
234T - 235T = 1TB
... unused hole ...
ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
236T - 252T = 16TB
... unused hole ...

Thanks
Baoquan


* Re: [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region
  2018-09-11 12:08             ` Baoquan He
@ 2018-09-12  3:18               ` Baoquan He
  2018-09-12  6:31                 ` Ingo Molnar
  0 siblings, 1 reply; 18+ messages in thread
From: Baoquan He @ 2018-09-12  3:18 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: tglx, hpa, thgarnie, kirill.shutemov, x86, linux-kernel,
	Peter Zijlstra, Kees Cook

On 09/11/18 at 08:08pm, Baoquan He wrote:
> On 09/11/18 at 11:28am, Ingo Molnar wrote:
> > Yeah, so proper context is still missing, this paragraph appears to assume from the reader a 
> > whole lot of prior knowledge, and this is one of the top comments in kaslr.c so there's nowhere 
> > else to go read about the background.
> > 
> > For example what is the range of randomization of each region? Assuming the static, 
> > non-randomized description in Documentation/x86/x86_64/mm.txt is correct, in what way does 
> > KASLR modify that layout?

Re-reading this paragraph, I found I missed describing the range for each
memory region, and in what way KASLR modifies the layout.

> > 
> > All of this is very opaque and not explained very well anywhere that I could find. We need to 
> > generate a proper description ASAP.
> 
> OK, let me try to give an context with my understanding. And copy the
> static layout of memory regions at below for reference.
> 
Here, Documentation/x86/x86_64/mm.txt is correct, and it's the
guideline for us to manipulate the layout of kernel memory regions.
Originally the starting address of each region was aligned to 512 GB,
so that each region starts at a PGD entry boundary in 4-level paging.
Since we are rich enough to have 120 TB of virtual address space, they
are actually aligned at 1 TB. So the randomness mainly comes from
three parts:

1) The direct mapping region for physical memory. 64 TB are reserved
to cover the maximum supported physical memory. However, most systems
have much less than 64 TB of RAM, often much less than 1 TB. The
surplus space can join the randomization; this is often the biggest
part.

2) The holes between memory regions, even though they are only 1 TB
each.

3) The KASAN region takes up 16 TB, but it is not used when KASLR is
enabled. This is another big part.

As you can see, of these three memory regions, the physical memory
mapping region has a variable size depending on the installed system
RAM, while the remaining two have fixed sizes: vmalloc is 32 TB and
vmemmap is 1 TB.

With this surplus address space, and with the starting address of each
memory region changed to be PUD level (1 GB) aligned, we can have
thousands of candidate positions in which to locate those three memory
regions.

The above is for 4-level paging mode. As for 5-level, since the
virtual address space is so big, Kirill makes the starting address of
each region P4D aligned, namely 512 GB.

When randomizing the layout, their order is kept: the physical memory
mapping region is still handled first, then vmalloc and vmemmap. Take
the physical memory mapping region as an example: we limit its
starting address to come from the first 1/3 of the whole available
virtual address space, which runs from 0xffff880000000000 to
0xfffffe0000000000, namely from the original starting address of the
physical memory mapping region to the starting address of the
cpu_entry_area mapping region. Once a random address is chosen for the
physical memory mapping, we jump over that region, add 1 GB, and
handle the next region within the remaining available space.


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ffff880000000000 - ffffc7ffffffffff (=64 TB) direct mapping of all phys. memory                                                                   
136T - 200T = 64TB
ffffc80000000000 - ffffc8ffffffffff (=40 bits) hole
200T - 201T = 1TB
ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
201T - 233T = 32TB
ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
233T - 234T = 1TB
ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
234T - 235T = 1TB
... unused hole ...
ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
236T - 252T = 16TB
... unused hole ...
                                    vaddr_end for KASLR
fffffe0000000000 - fffffe7fffffffff (=39 bits) cpu_entry_area mapping
254T - 254T+512G

Thanks
Baoquan


* Re: [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region
  2018-09-12  3:18               ` Baoquan He
@ 2018-09-12  6:31                 ` Ingo Molnar
  2018-09-12  9:41                   ` Baoquan He
  2018-09-21  2:10                   ` Baoquan He
  0 siblings, 2 replies; 18+ messages in thread
From: Ingo Molnar @ 2018-09-12  6:31 UTC (permalink / raw)
  To: Baoquan He
  Cc: tglx, hpa, thgarnie, kirill.shutemov, x86, linux-kernel,
	Peter Zijlstra, Kees Cook


* Baoquan He <bhe@redhat.com> wrote:

> On 09/11/18 at 08:08pm, Baoquan He wrote:
> > On 09/11/18 at 11:28am, Ingo Molnar wrote:
> > > Yeah, so proper context is still missing, this paragraph appears to assume from the reader a 
> > > whole lot of prior knowledge, and this is one of the top comments in kaslr.c so there's nowhere 
> > > else to go read about the background.
> > > 
> > > For example what is the range of randomization of each region? Assuming the static, 
> > > non-randomized description in Documentation/x86/x86_64/mm.txt is correct, in what way does 
> > > KASLR modify that layout?
> 
> Re-read this paragraph, found I missed saying the range for each memory
> region, and in what way KASLR modify the layout.
> 
> > > 
> > > All of this is very opaque and not explained very well anywhere that I could find. We need to 
> > > generate a proper description ASAP.
> > 
> > OK, let me try to give an context with my understanding. And copy the
> > static layout of memory regions at below for reference.
> > 
> Here, Documentation/x86/x86_64/mm.txt is correct, and it's the
> guideline for us to manipulate the layout of kernel memory regions.
> Originally the starting address of each region was aligned to 512 GB,
> so that each region starts at a PGD entry boundary in 4-level paging.
> Since we are rich enough to have 120 TB of virtual address space, they
> are actually aligned at 1 TB. So the randomness mainly comes from
> three parts:
> 
> 1) The direct mapping region for physical memory. 64 TB are reserved
> to cover the maximum supported physical memory. However, most systems
> have much less than 64 TB of RAM, often much less than 1 TB. The
> surplus space can join the randomization; this is often the biggest
> part.

So i.e. in the non-KASLR case we have this description (from mm.txt):

 ffff880000000000 - ffffc7ffffffffff (=64 TB) direct mapping of all phys. memory
 ffffc80000000000 - ffffc8ffffffffff (=40 bits) hole
 ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
 ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
 ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
 ... unused hole ...
 ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
 ... unused hole ...
                                     vaddr_end for KASLR
 fffffe0000000000 - fffffe7fffffffff (=39 bits) cpu_entry_area mapping
 ...

The problems start here, this map is already *horribly* confusing:

 - we mix size in TB with 'bits'
 - we sometimes mention a size in the description and sometimes not
 - we sometimes list holes by address, sometimes only as an 'unused hole' line ...

So how about first cleaning up the memory maps in mm.txt and streamlining them, like this:

 ffff880000000000 - ffffc7ffffffffff (=46 bits, 64 TB) direct mapping of all phys. memory (page_offset_base)
 ffffc80000000000 - ffffc8ffffffffff (=40 bits,  1 TB) ... unused hole
 ffffc90000000000 - ffffe8ffffffffff (=45 bits, 32 TB) vmalloc/ioremap space (vmalloc_base)
 ffffe90000000000 - ffffe9ffffffffff (=40 bits,  1 TB) ... unused hole
 ffffea0000000000 - ffffeaffffffffff (=40 bits,  1 TB) virtual memory map (vmemmap_base)
 ffffeb0000000000 - ffffebffffffffff (=40 bits,  1 TB) ... unused hole
 ffffec0000000000 - fffffbffffffffff (=44 bits, 16 TB) KASAN shadow memory
 fffffc0000000000 - fffffdffffffffff (=41 bits,  2 TB) ... unused hole
                                     vaddr_end for KASLR
 fffffe0000000000 - fffffe7fffffffff (=39 bits) cpu_entry_area mapping
 ...

Please double check all the calculations and ranges, and I'd suggest doing it for the whole 
file. Note how I added the global variables describing the base addresses - this makes it very 
easy to match the pointers in kaslr_regions[] to the static map, to see the intent of 
kaslr_regions[].

BTW., isn't that 'vaddr_end for KASLR' entry position inaccurate? In the typical case it could 
very well be that by chance all 3 areas end up being randomized into the first 64 TB region, 
right?

I.e. vaddr_end could be at any 1 TB boundary in the above ranges. I'd suggest leaving out all 
KASLR from this static mappings table - explain it separately in this file, maybe even create 
its own memory map. I'll help with the wording.

> 2) The holes between memory regions, even though they are only 1 TB
> each.

There's a 2 TB hole too.

> 3) The KASAN region takes up 16 TB, but it is not used when KASLR is
> enabled. This is another big part.

Ok.

> As you can see, of these three memory regions, the physical memory
> mapping region has a variable size depending on the installed system
> RAM, while the remaining two have fixed sizes: vmalloc is 32 TB and
> vmemmap is 1 TB.
> 
> With this surplus address space, and with the starting address of each
> memory region changed to be PUD level (1 GB) aligned, we can have
> thousands of candidate positions in which to locate those three memory
> regions.

It would be nice to provide the maximum number of bits randomized, from which the number of 
GBs of physical RAM has to be subtracted.

Because 'thousands' of randomization targets is *excessively* poor randomization - caused by 
the ridiculously high rounding to 1GB. It would be _very_ nice to extend randomization to at 
least 2MB boundaries instead. (If the half cacheline of PTE entries possibly 'wasted' is an 
issue we could increase that to 128 MB, but should start with 2MB first.)

That would instantly multiply the randomization selection by 512 ...

> The above is for 4-level paging mode. As for 5-level, since the
> virtual address space is so big, Kirill makes the starting address of
> each region P4D aligned, namely 512 GB.

512 GB of every region? That's ridiculously poor randomization too: we should *utilize* the 
extra randomness and match the randomization on 56 bits CPUs as well, instead of wasting it!

> When randomizing the layout, their order is kept: the physical memory
> mapping region is still handled first, then vmalloc and vmemmap. Take
> the physical memory mapping region as an example: we limit its
> starting address to come from the first 1/3 of the whole available
> virtual address space, which runs from 0xffff880000000000 to
> 0xfffffe0000000000, namely from the original starting address of the
> physical memory mapping region to the starting address of the
> cpu_entry_area mapping region. Once a random address is chosen for the
> physical memory mapping, we jump over that region, add 1 GB, and
> handle the next region within the remaining available space.

Ok, makes sense now!

I'd suggest adding an explanation like this to @size_tb:

  @size_tb is physical RAM size, rounded up to the next 1 TB boundary so that the base 
  addresses following this region still start on 1 TB boundaries.

Once we improve randomization to be at the 2 MB granularity this should be renamed 
->size_rounded_up or so.

Would you like to work on this? These would be really nice additions, once the code is cleaned 
up to be maintainable and the pending bug fixes you have are merged.

In terms of patch logistics I'd suggest this ordering:

 - documentation fixes
 - simple cleanups
 - fixes
 - enhancements

With no more than ~5 patches sent in a series. Feel free to integrate all pending 
boot-memory-map fixes and features as well, we'll figure out the right way to do them as they 
happen - but let's start with the simple stuff first, ok?

Thanks,

	Ingo


* Re: [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region
  2018-09-12  6:31                 ` Ingo Molnar
@ 2018-09-12  9:41                   ` Baoquan He
  2018-09-12 10:01                     ` Ingo Molnar
  2018-09-21  2:10                   ` Baoquan He
  1 sibling, 1 reply; 18+ messages in thread
From: Baoquan He @ 2018-09-12  9:41 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: tglx, hpa, thgarnie, kirill.shutemov, x86, linux-kernel,
	Peter Zijlstra, Kees Cook

On 09/12/18 at 08:31am, Ingo Molnar wrote:
> 
> * Baoquan He <bhe@redhat.com> wrote:
> 
> > On 09/11/18 at 08:08pm, Baoquan He wrote:
> > > On 09/11/18 at 11:28am, Ingo Molnar wrote:
> > > > Yeah, so proper context is still missing, this paragraph appears to assume from the reader a 
> > > > whole lot of prior knowledge, and this is one of the top comments in kaslr.c so there's nowhere 
> > > > else to go read about the background.
> > > > 
> > > > For example what is the range of randomization of each region? Assuming the static, 
> > > > non-randomized description in Documentation/x86/x86_64/mm.txt is correct, in what way does 
> > > > KASLR modify that layout?
> > 
> > Re-read this paragraph, found I missed saying the range for each memory
> > region, and in what way KASLR modify the layout.
> > 
> > > > 
> > > > All of this is very opaque and not explained very well anywhere that I could find. We need to 
> > > > generate a proper description ASAP.
> > > 
> > > OK, let me try to give an context with my understanding. And copy the
> > > static layout of memory regions at below for reference.
> > > 
> > Here, Documentation/x86/x86_64/mm.txt is correct, and it's the
> > guideline for us to manipulate the layout of kernel memory regions.
> > Originally the starting address of each region was aligned to 512 GB,
> > so that each region starts at a PGD entry boundary in 4-level paging.
> > Since we are rich enough to have 120 TB of virtual address space, they
> > are actually aligned at 1 TB. So the randomness mainly comes from
> > three parts:
> > 
> > 1) The direct mapping region for physical memory. 64 TB are reserved
> > to cover the maximum supported physical memory. However, most systems
> > have much less than 64 TB of RAM, often much less than 1 TB. The
> > surplus space can join the randomization; this is often the biggest
> > part.
> 
> So i.e. in the non-KASLR case we have this description (from mm.txt):
> 
>  ffff880000000000 - ffffc7ffffffffff (=64 TB) direct mapping of all phys. memory
>  ffffc80000000000 - ffffc8ffffffffff (=40 bits) hole
>  ffffc90000000000 - ffffe8ffffffffff (=45 bits) vmalloc/ioremap space
>  ffffe90000000000 - ffffe9ffffffffff (=40 bits) hole
>  ffffea0000000000 - ffffeaffffffffff (=40 bits) virtual memory map (1TB)
>  ... unused hole ...
>  ffffec0000000000 - fffffbffffffffff (=44 bits) kasan shadow memory (16TB)
>  ... unused hole ...
>                                      vaddr_end for KASLR
>  fffffe0000000000 - fffffe7fffffffff (=39 bits) cpu_entry_area mapping
>  ...
> 
> The problems start here, this map is already *horribly* confusing:
> 
>  - we mix size in TB with 'bits'
>  - we sometimes mention a size in the description and sometimes not
>  - we sometimes list holes by address, sometimes only as an 'unused hole' line ...
> 
> So how about first cleaning up the memory maps in mm.txt and streamlining them, like this:
> 
>  ffff880000000000 - ffffc7ffffffffff (=46 bits, 64 TB) direct mapping of all phys. memory (page_offset_base)
>  ffffc80000000000 - ffffc8ffffffffff (=40 bits,  1 TB) ... unused hole
>  ffffc90000000000 - ffffe8ffffffffff (=45 bits, 32 TB) vmalloc/ioremap space (vmalloc_base)
>  ffffe90000000000 - ffffe9ffffffffff (=40 bits,  1 TB) ... unused hole
>  ffffea0000000000 - ffffeaffffffffff (=40 bits,  1 TB) virtual memory map (vmemmap_base)
>  ffffeb0000000000 - ffffebffffffffff (=40 bits,  1 TB) ... unused hole
>  ffffec0000000000 - fffffbffffffffff (=44 bits, 16 TB) KASAN shadow memory
>  fffffc0000000000 - fffffdffffffffff (=41 bits,  2 TB) ... unused hole
>                                      vaddr_end for KASLR
>  fffffe0000000000 - fffffe7fffffffff (=39 bits) cpu_entry_area mapping
>  ...
> 
> Please double check all the calculations and ranges, and I'd suggest doing it for the whole 
> file. Note how I added the global variables describing the base addresses - this makes it very 
> easy to match the pointers in kaslr_regions[] to the static map, to see the intent of 
> kaslr_regions[].

OK.

> 
> BTW., isn't that 'vaddr_end for KASLR' entry position inaccurate? In the typical case it could 
> very well be that by chance all 3 areas end up being randomized into the first 64 TB region, 
> right?

Hmm, I think it denotes the whole space within which KASLR is allowed to
randomize. [vaddr_start, vaddr_end] is a range; the KASLR algorithm can
only move memory regions inside this area. It doesn't describe the final
result of KASLR, or any typical outcome of it.

vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
vaddr_end = CPU_ENTRY_AREA_BASE;

> 
> I.e. vaddr_end could be at any 1 TB boundary in the above ranges. I'd suggest leaving out all 
> KASLR from this static mappings table - explain it separately in this file, maybe even create 
> its own memory map. I'll help with the wording.
> 
> > 2) The holes between memory regions, even though they are only 1 TB
> > each.
> 
> There's a 2 TB hole too.

Yeah, the last one.

> 
> > 3) The KASAN region takes up 16 TB, but it is not used when KASLR is
> > enabled. This is another big part.
> 
> Ok.
> 
> > As you can see, of these three memory regions, the physical memory
> > mapping region has a variable size depending on the installed system
> > RAM, while the remaining two have fixed sizes: vmalloc is 32 TB and
> > vmemmap is 1 TB.
> > 
> > With this surplus address space, and with the starting address of each
> > memory region changed to be PUD level (1 GB) aligned, we can have
> > thousands of candidate positions in which to locate those three memory
> > regions.
> 
> It would be nice to provide the maximum number of bits randomized, from which the number of 
> GBs of physical RAM has to be subtracted.
> 
> Because 'thousands' of randomization targets is *excessively* poor randomization - caused by 
> the ridiculously high rounding to 1GB. It would be _very_ nice to extend randomization to at 
> least 2MB boundaries instead. (If the half cacheline of PTE entries possibly 'wasted' is an 
> issue we could increase that to 128 MB, but should start with 2MB first.)
> 
> That would instantly multiply the randomization selection by 512 ...

This may involve critical code changes. E.g. in the commit below, when we
copy the page table we only need to descend to the PUD level, since
PAGE_OFFSET is PUD_SIZE aligned; if it becomes 2 MB aligned, we need to
descend to the PMD level. This is the only issue I can think of so far.
Surely, I can do more investigation and see what needs to be done to
achieve the goal.

commit 94133e46a0f5ca3f138479806104ab4a8cb0455e
Author: Baoquan He <bhe@redhat.com>
Date:   Fri May 26 12:36:50 2017 +0100

    x86/efi: Correct EFI identity mapping under 'efi=old_map' when KASLR is enabled

> 
> > Above is for 4-level paging mode. As for 5-level, since the virtual
> > address space is so big, Kirill makes the starting address of the
> > regions P4D aligned, namely 512 GB aligned.
> 
> 512 GB of every region? That's ridiculously poor randomization too: we should *utilize* the 
> extra randomness and match the randomization on 56 bits CPUs as well, instead of wasting it!
> 
> > When randomizing the layout, their order is kept: the physical memory
> > mapping region is still handled first, then vmalloc and vmemmap. Let's
> > take the physical memory mapping region as an example: we limit its
> > starting address to be taken from the first 1/3 of the whole available
> > virtual address space, which runs from 0xffff880000000000 to
> > 0xfffffe0000000000, namely from the original starting address of the
> > physical memory mapping region to the starting address of the
> > cpu_entry_area mapping region. Once a random address is chosen for the
> > physical memory mapping, we jump over the region and add 1 GB to begin
> > handling the next region within the remaining available space.
> 
> Ok, makes sense now!
> 
> I'd suggest adding an explanation like this to @size_tb:
> 
>   @size_tb is physical RAM size, rounded up to the next 1 TB boundary so that the base 
>   addresses following this region still start on 1 TB boundaries.
> 
> Once we improve randomization to be at the 2 MB granularity this should be renamed 
> ->size_rounded_up or so.
> 
> Would you like to work on this? These would be really nice additions, once the code is cleaned 
> up to be maintainable and the pending bug fixes you have are merged.
> 
> In terms of patch logistics I'd suggest this ordering:
> 
>  - documentation fixes
>  - simple cleanups
>  - fixes
>  - enhancements
> 
> With no more than ~5 patches sent in a series. Feel free to integrate all pending 
> boot-memory-map fixes and features as well, we'll figure out the right way to do them as they 
> happen - but let's start with the simple stuff first, ok?

Sure, will do according to your suggestion.

Thanks
Baoquan

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region
  2018-09-12  9:41                   ` Baoquan He
@ 2018-09-12 10:01                     ` Ingo Molnar
  0 siblings, 0 replies; 18+ messages in thread
From: Ingo Molnar @ 2018-09-12 10:01 UTC (permalink / raw)
  To: Baoquan He
  Cc: tglx, hpa, thgarnie, kirill.shutemov, x86, linux-kernel,
	Peter Zijlstra, Kees Cook


* Baoquan He <bhe@redhat.com> wrote:

> > BTW., isn't that 'vaddr_end for KASLR' entry position inaccurate? In the typical case it could 
> > very well be that by chance all 3 areas end up being randomized into the first 64 TB region, 
> > right?
> 
> Hmm, I think it means the whole space where KASLR is allowed to
> randomize. [vaddr_start, vaddr_end] is a range; the KASLR algorithm can
> only move memory regions inside this area. It doesn't denote the final
> result of KASLR, or any typical layout.
> 
> vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
> vaddr_end = CPU_ENTRY_AREA_BASE;

Ok.

> > > With this surplus address space, as well as by changing the starting
> > > address of each memory region to be PUD-level (1 GB) aligned, we can
> > > have thousands of candidate positions at which to locate those three memory regions.
> > 
> > Would be nice to provide the maximum number of bits randomized, from which the number of GBs
> > of physical RAM has to be subtracted.
> > 
> > Because 'thousands' of randomization targets is *excessively* poor randomization - caused by 
> > the ridiculously high rounding to 1GB. It would be _very_ nice to extend randomization to at 
> > least 2MB boundaries instead. (If the half cacheline of PTE entries possibly 'wasted' is an 
> > issue we could increase that to 128 MB, but should start with 2MB first.)
> > 
> > That would instantly multiply the randomization selection by 512 ...
> 
> This may involve changes to critical code. E.g. in the commit below, when
> we copy the page table, we only need to descend to the PUD level since
> PAGE_OFFSET is PUD_SIZE aligned; with 2 MB alignment, we would need to
> descend to the PMD level. That is the only impact I can think of for this
> issue. Surely, I can do more investigation and see what needs to be done
> to achieve the goal.

Yeah, that would be nice to do; it would also test the robustness of all this code.
Obviously to be done after the cleanups, fixes and simpler changes.

It would be a _really_ nice feature, adding about 9 bits of absolute randomization and 3x9 bits
of relative randomization between the ranges.

> Sure, will do according to your suggestion.

Thanks!!

	Ingo


* Re: [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region
  2018-09-12  6:31                 ` Ingo Molnar
  2018-09-12  9:41                   ` Baoquan He
@ 2018-09-21  2:10                   ` Baoquan He
  1 sibling, 0 replies; 18+ messages in thread
From: Baoquan He @ 2018-09-21  2:10 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: tglx, hpa, thgarnie, kirill.shutemov, x86, linux-kernel,
	Peter Zijlstra, Kees Cook

On 09/12/18 at 08:31am, Ingo Molnar wrote:
> Would you like to work on this? These would be really nice additions, once the code is cleaned 
> up to be maintainable and the pending bug fixes you have are merged.
> 
> In terms of patch logistics I'd suggest this ordering:
> 
>  - documentation fixes
>  - simple cleanups
>  - fixes
>  - enhancements

Sorry, there were some RHEL kernel issues last week, so I only started on
this yesterday. I have sent out the documentation fixes as the first patch
series, as you suggested. Next come the simple cleanups.

> 
> With no more than ~5 patches sent in a series. Feel free to integrate all pending 
> boot-memory-map fixes and features as well, we'll figure out the right way to do them as they 
> happen - but let's start with the simple stuff first, ok?
> 


end of thread, other threads:[~2018-09-21  2:10 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-09-09 12:49 [PATCH v2 1/3] x86/mm/KASLR: Fix the wrong calculation of kalsr region initial size Baoquan He
2018-09-09 12:49 ` [PATCH v2 2/3] x86/mm/KASLR: Calculate the actual size of vmemmap region Baoquan He
2018-09-10  6:11   ` Ingo Molnar
2018-09-11  7:30     ` Baoquan He
2018-09-11  7:59       ` Ingo Molnar
2018-09-11  8:18         ` Baoquan He
2018-09-11  9:28           ` Ingo Molnar
2018-09-11 12:08             ` Baoquan He
2018-09-12  3:18               ` Baoquan He
2018-09-12  6:31                 ` Ingo Molnar
2018-09-12  9:41                   ` Baoquan He
2018-09-12 10:01                     ` Ingo Molnar
2018-09-21  2:10                   ` Baoquan He
2018-09-09 12:49 ` [PATCH v2 3/3] mm: Add build time sanity chcek for struct page size Baoquan He
2018-09-10 13:41   ` kbuild test robot
2018-09-11  7:47     ` Baoquan He
2018-09-10  6:18 ` [PATCH v2 1/3] x86/mm/KASLR: Fix the wrong calculation of kalsr region initial size Ingo Molnar
2018-09-11  7:22   ` Baoquan He
