linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/2] x86/mm/KASLR: Change the granularity of randomization to PUD size in 5-level
@ 2019-02-24 13:22 Baoquan He
  2019-02-24 13:22 ` [PATCH 1/2] x86/mm/KASLR: Only build one PUD entry of area for real mode trampoline Baoquan He
  2019-02-24 13:22 ` [PATCH 2/2] x86/mm/KASLR: Change the granularity of randomization to PUD size in 5-level Baoquan He
  0 siblings, 2 replies; 5+ messages in thread
From: Baoquan He @ 2019-02-24 13:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: dave.hansen, luto, peterz, tglx, mingo, bp, hpa, x86,
	kirill.shutemov, keescook, thgarnie, Baoquan He

Background:
***
Earlier, while reviewing a KASLR patch series, Ingo noticed that the
current memory region KASLR only randomizes at PUD-size granularity in
4-level paging mode, and at P4D-size granularity in 5-level paging mode.
He suggested changing both to PMD-size granularity:

  http://lkml.kernel.org/r/20180912100135.GB3333@gmail.com

Later, I changed the code to support PMD-level randomization in both
4-level and 5-level paging modes:

  https://github.com/baoquan-he/linux/commits/mm-kaslr-2m-aligned

The test passed on my KVM guest with 1 GB of RAM, but failed when I
increased the RAM to 4 GB, and kept failing with larger RAM sizes.

Analysis showed the cause: on Intel CPUs, a 1 GB page must be mapped at
a 1 GB aligned physical address. Randomizing at 2 MB granularity breaks
that alignment and causes errors. See the following table in the Intel
IA-32 manual:

  Table 4-15. Format of an IA-32e Page-Directory-Pointer-Table Entry (PDPTE) that Maps a 1-GByte Page

So PMD-size randomization for memory region KASLR is not doable.
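
To see the conflict concretely, here is a minimal userspace sketch (not
kernel code; the shifted base value is made up). Once the direct mapping
base is randomized at 2 MB granularity, a 1 GB aligned physical block no
longer lands at a 1 GB aligned virtual address, so it cannot be mapped
with a 1 GB page:

#include <stdio.h>

#define PMD_SIZE (2UL << 20)   /* 2 MB */
#define PUD_SIZE (1UL << 30)   /* 1 GB */

int main(void)
{
	/* Direct mapping base shifted by a 2 MB granular random offset. */
	unsigned long base  = 0xffff888000000000UL + 7 * PMD_SIZE;
	unsigned long paddr = 4 * PUD_SIZE;  /* 1 GB aligned physical block */
	unsigned long vaddr = base + paddr;  /* its direct mapping address */

	/*
	 * A 1 GB page maps a 1 GB aligned virtual slot to a 1 GB aligned
	 * physical address (Table 4-15). Since vaddr is now only 2 MB
	 * aligned, the slot's physical target cannot be 1 GB aligned, so
	 * 1 GB pages are unusable here.
	 */
	printf("paddr 1G-aligned: %d\n", !(paddr & (PUD_SIZE - 1)));  /* 1 */
	printf("vaddr 1G-aligned: %d\n", !(vaddr & (PUD_SIZE - 1)));  /* 0 */
	return 0;
}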

However, while investigating and testing the above code, it turned out
that the current code builds a needlessly large identity mapping for the
real mode trampoline when KASLR is enabled. From the code, only a small
area (smaller than 1 MB) needs to be identity mapped. See the patch
below, taken from the mm-kaslr-2m-aligned series above: it builds only a
2 MB identity mapping for the real mode trampoline, and the test passed
on 4-level machines with 32 GB of RAM and on a 5-level KVM guest.

https://github.com/baoquan-he/linux/commit/e120e67fbf9a5aa818d20084d8dea5b4a27ecf97

Result:
Make a patchset to:
  1) change the code to build only a 1 GB area for the real mode
     trampoline, namely copy only the one PUD entry covering physical
     address 0;

  2) improve the randomization granularity of 5-level paging from P4D
     size to PUD size.


Baoquan He (2):
  x86/mm/KASLR: Only build one PUD entry of area for real mode
    trampoline
  x86/mm/KASLR: Change the granularity of randomization to PUD size in
    5-level

 arch/x86/mm/kaslr.c | 82 +++++++++++++++++----------------------------
 1 file changed, 30 insertions(+), 52 deletions(-)

-- 
2.17.2



* [PATCH 1/2] x86/mm/KASLR: Only build one PUD entry of area for real mode trampoline
  2019-02-24 13:22 [PATCH 0/2] x86/mm/KASLR: Change the granularity of randomization to PUD size in 5-level Baoquan He
@ 2019-02-24 13:22 ` Baoquan He
  2019-02-25 12:31   ` Kirill A. Shutemov
  2019-02-24 13:22 ` [PATCH 2/2] x86/mm/KASLR: Change the granularity of randomization to PUD size in 5-level Baoquan He
  1 sibling, 1 reply; 5+ messages in thread
From: Baoquan He @ 2019-02-24 13:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: dave.hansen, luto, peterz, tglx, mingo, bp, hpa, x86,
	kirill.shutemov, keescook, thgarnie, Baoquan He

The current code builds the identity mapping for the real mode
trampoline by borrowing page tables from the direct mapping section
when KASLR is enabled. It copies the present entries of the first PUD
table in 4-level paging mode, or of the first P4D table in 5-level
paging mode.

However, only a very small area below 1 MB is reserved for the real
mode trampoline in reserve_real_mode(). It makes no sense to build
such a large mapping for it. Since the randomization granularity is
1 GB in 4-level and 512 GB in 5-level, copying one PUD entry is
enough.

Hence, copy only the one PUD entry covering the area where physical
address 0 resides. This is also preparation for later changing the
randomization granularity of 5-level paging mode from 512 GB to 1 GB.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 arch/x86/mm/kaslr.c | 72 ++++++++++++++++++---------------------------
 1 file changed, 28 insertions(+), 44 deletions(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 754b5da91d43..6b2a06c36b6f 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -226,74 +226,58 @@ void __init kernel_randomize_memory(void)
 
 static void __meminit init_trampoline_pud(void)
 {
-	unsigned long paddr, paddr_next;
+	unsigned long paddr, vaddr;
 	pgd_t *pgd;
-	pud_t *pud_page, *pud_page_tramp;
-	int i;
 
+	p4d_t *p4d_page, *p4d_page_tramp, *p4d, *p4d_tramp;
+	pud_t *pud_page, *pud_page_tramp, *pud, *pud_tramp;
+
+
+	p4d_page_tramp = alloc_low_page();
 	pud_page_tramp = alloc_low_page();
 
 	paddr = 0;
+	vaddr = (unsigned long)__va(paddr);
 	pgd = pgd_offset_k((unsigned long)__va(paddr));
-	pud_page = (pud_t *) pgd_page_vaddr(*pgd);
 
-	for (i = pud_index(paddr); i < PTRS_PER_PUD; i++, paddr = paddr_next) {
-		pud_t *pud, *pud_tramp;
-		unsigned long vaddr = (unsigned long)__va(paddr);
+	if (pgtable_l5_enabled()) {
+		p4d_page = (p4d_t *) pgd_page_vaddr(*pgd);
+		p4d = p4d_page + p4d_index(vaddr);
 
-		pud_tramp = pud_page_tramp + pud_index(paddr);
+		pud_page = (pud_t *) p4d_page_vaddr(*p4d);
 		pud = pud_page + pud_index(vaddr);
-		paddr_next = (paddr & PUD_MASK) + PUD_SIZE;
-
-		*pud_tramp = *pud;
-	}
-
-	set_pgd(&trampoline_pgd_entry,
-		__pgd(_KERNPG_TABLE | __pa(pud_page_tramp)));
-}
-
-static void __meminit init_trampoline_p4d(void)
-{
-	unsigned long paddr, paddr_next;
-	pgd_t *pgd;
-	p4d_t *p4d_page, *p4d_page_tramp;
-	int i;
 
-	p4d_page_tramp = alloc_low_page();
+		p4d_tramp = p4d_page_tramp + p4d_index(paddr);
+		pud_tramp = pud_page_tramp + pud_index(paddr);
 
-	paddr = 0;
-	pgd = pgd_offset_k((unsigned long)__va(paddr));
-	p4d_page = (p4d_t *) pgd_page_vaddr(*pgd);
+		*pud_tramp = *pud;
 
-	for (i = p4d_index(paddr); i < PTRS_PER_P4D; i++, paddr = paddr_next) {
-		p4d_t *p4d, *p4d_tramp;
-		unsigned long vaddr = (unsigned long)__va(paddr);
+		set_p4d(p4d_tramp,
+			__p4d(_KERNPG_TABLE | __pa(pud_page_tramp)));
 
-		p4d_tramp = p4d_page_tramp + p4d_index(paddr);
-		p4d = p4d_page + p4d_index(vaddr);
-		paddr_next = (paddr & P4D_MASK) + P4D_SIZE;
+		set_pgd(&trampoline_pgd_entry,
+			__pgd(_KERNPG_TABLE | __pa(p4d_page_tramp)));
+	} else {
+		pud_page = (pud_t *) pgd_page_vaddr(*pgd);
+		pud = pud_page + pud_index(vaddr);
 
-		*p4d_tramp = *p4d;
+		pud_tramp = pud_page_tramp + pud_index(paddr);
+		*pud_tramp = *pud;
+		set_pgd(&trampoline_pgd_entry,
+			__pgd(_KERNPG_TABLE | __pa(pud_page_tramp)));
 	}
-
-	set_pgd(&trampoline_pgd_entry,
-		__pgd(_KERNPG_TABLE | __pa(p4d_page_tramp)));
 }
 
 /*
- * Create PGD aligned trampoline table to allow real mode initialization
- * of additional CPUs. Consume only 1 low memory page.
+ * Create PUD aligned trampoline table to allow real mode initialization
+ * of additional CPUs. Consume only 1 or 2 low memory pages.
  */
 void __meminit init_trampoline(void)
 {
-
 	if (!kaslr_memory_enabled()) {
 		init_trampoline_default();
 		return;
 	}
 
-	if (pgtable_l5_enabled())
-		init_trampoline_p4d();
-	else
-		init_trampoline_pud();
+	init_trampoline_pud();
 }
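
For readability, this is how init_trampoline_pud() reads after the
patch, lightly reformatted and with explanatory comments added; neither
the reformatting nor the comments are part of the posted patch:

static void __meminit init_trampoline_pud(void)
{
	unsigned long paddr, vaddr;
	pgd_t *pgd;
	p4d_t *p4d_page, *p4d_page_tramp, *p4d, *p4d_tramp;
	pud_t *pud_page, *pud_page_tramp, *pud, *pud_tramp;

	p4d_page_tramp = alloc_low_page();
	pud_page_tramp = alloc_low_page();

	/* The trampoline sits below 1 MB, i.e. in the PUD covering paddr 0. */
	paddr = 0;
	vaddr = (unsigned long)__va(paddr);
	pgd = pgd_offset_k((unsigned long)__va(paddr));

	if (pgtable_l5_enabled()) {
		/* Walk the direct mapping: pgd -> p4d -> pud of paddr 0. */
		p4d_page = (p4d_t *) pgd_page_vaddr(*pgd);
		p4d = p4d_page + p4d_index(vaddr);
		pud_page = (pud_t *) p4d_page_vaddr(*p4d);
		pud = pud_page + pud_index(vaddr);

		/* Copy that single PUD entry into the trampoline tables. */
		p4d_tramp = p4d_page_tramp + p4d_index(paddr);
		pud_tramp = pud_page_tramp + pud_index(paddr);
		*pud_tramp = *pud;

		/* Wire up: trampoline pgd -> p4d page -> pud page. */
		set_p4d(p4d_tramp,
			__p4d(_KERNPG_TABLE | __pa(pud_page_tramp)));
		set_pgd(&trampoline_pgd_entry,
			__pgd(_KERNPG_TABLE | __pa(p4d_page_tramp)));
	} else {
		/* 4-level: the pgd entry points straight at a pud page. */
		pud_page = (pud_t *) pgd_page_vaddr(*pgd);
		pud = pud_page + pud_index(vaddr);

		pud_tramp = pud_page_tramp + pud_index(paddr);
		*pud_tramp = *pud;
		set_pgd(&trampoline_pgd_entry,
			__pgd(_KERNPG_TABLE | __pa(pud_page_tramp)));
	}
}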
-- 
2.17.2



* [PATCH 2/2] x86/mm/KASLR: Change the granularity of randomization to PUD size in 5-level
  2019-02-24 13:22 [PATCH 0/2] x86/mm/KASLR: Change the granularity of randomization to PUD size in 5-level Baoquan He
  2019-02-24 13:22 ` [PATCH 1/2] x86/mm/KASLR: Only build one PUD entry of area for real mode trampoline Baoquan He
@ 2019-02-24 13:22 ` Baoquan He
  1 sibling, 0 replies; 5+ messages in thread
From: Baoquan He @ 2019-02-24 13:22 UTC (permalink / raw)
  To: linux-kernel
  Cc: dave.hansen, luto, peterz, tglx, mingo, bp, hpa, x86,
	kirill.shutemov, keescook, thgarnie, Baoquan He

The current randomization granularity of 5-level paging is 512 GB.
Improve it to 1 GB. This adds more randomness to memory region KASLR
in 5-level paging mode.
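
For a rough sense of the gain (hypothetical numbers): if a region has
10 TB (10240 GB) of entropy budget, 512 GB granularity permits only
10240/512 + 1 = 21 possible bases, while 1 GB granularity permits
10240 + 1 = 10241.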

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 arch/x86/mm/kaslr.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 6b2a06c36b6f..248509986f1f 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -204,10 +204,7 @@ void __init kernel_randomize_memory(void)
 		 */
 		entropy = remain_entropy / (ARRAY_SIZE(kaslr_regions) - i);
 		prandom_bytes_state(&rand_state, &rand, sizeof(rand));
-		if (pgtable_l5_enabled())
-			entropy = (rand % (entropy + 1)) & P4D_MASK;
-		else
-			entropy = (rand % (entropy + 1)) & PUD_MASK;
+		entropy = (rand % (entropy + 1)) & PUD_MASK;
 		vaddr += entropy;
 		*kaslr_regions[i].base = vaddr;
 
@@ -216,10 +213,7 @@ void __init kernel_randomize_memory(void)
 		 * randomization alignment.
 		 */
 		vaddr += kaslr_regions[i].size_tb << TB_SHIFT;
-		if (pgtable_l5_enabled())
-			vaddr = round_up(vaddr + 1, P4D_SIZE);
-		else
-			vaddr = round_up(vaddr + 1, PUD_SIZE);
+		vaddr = round_up(vaddr + 1, PUD_SIZE);
 		remain_entropy -= entropy;
 	}
 }
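
A minimal userspace sketch of the arithmetic above (the start address,
region sizes, and entropy pool are made up, and libc random() stands in
for the kernel's prandom state); it shows that masking with PUD_MASK
keeps every region base 1 GB aligned:

#include <stdio.h>
#include <stdlib.h>

#define PUD_SHIFT 30
#define PUD_SIZE  (1UL << PUD_SHIFT)
#define PUD_MASK  (~(PUD_SIZE - 1))

int main(void)
{
	unsigned long remain_entropy = 16UL << PUD_SHIFT; /* 16 GB to spread */
	unsigned long size_gb[3] = { 4, 2, 1 };           /* region sizes */
	unsigned long vaddr = 0xffff880000000000UL;       /* made-up start */
	unsigned long rand, entropy;
	int i;

	srandom(1);
	for (i = 0; i < 3; i++) {
		/* Equal share of the remaining entropy for each region. */
		entropy = remain_entropy / (3 - i);
		rand = ((unsigned long)random() << 32) | random();
		/* Random offset truncated to whole 1 GB units. */
		entropy = (rand % (entropy + 1)) & PUD_MASK;
		vaddr += entropy;
		printf("region %d base %#lx, 1G-aligned: %d\n",
		       i, vaddr, !(vaddr & (PUD_SIZE - 1)));
		/* Step over the region, then round up to the next 1 GB. */
		vaddr += size_gb[i] << PUD_SHIFT;
		vaddr = (vaddr + PUD_SIZE) & PUD_MASK; /* round_up(v + 1, 1G) */
		remain_entropy -= entropy;
	}
	return 0;
}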
-- 
2.17.2



* Re: [PATCH 1/2] x86/mm/KASLR: Only build one PUD entry of area for real mode trampoline
  2019-02-24 13:22 ` [PATCH 1/2] x86/mm/KASLR: Only build one PUD entry of area for real mode trampoline Baoquan He
@ 2019-02-25 12:31   ` Kirill A. Shutemov
  2019-02-25 13:20     ` Baoquan He
  0 siblings, 1 reply; 5+ messages in thread
From: Kirill A. Shutemov @ 2019-02-25 12:31 UTC (permalink / raw)
  To: Baoquan He
  Cc: linux-kernel, dave.hansen, luto, peterz, tglx, mingo, bp, hpa,
	x86, kirill.shutemov, keescook, thgarnie

On Sun, Feb 24, 2019 at 09:22:30PM +0800, Baoquan He wrote:
> The current code builds the identity mapping for the real mode
> trampoline by borrowing page tables from the direct mapping section
> when KASLR is enabled. It copies the present entries of the first PUD
> table in 4-level paging mode, or of the first P4D table in 5-level
> paging mode.
> 
> However, only a very small area below 1 MB is reserved for the real
> mode trampoline in reserve_real_mode(). It makes no sense to build
> such a large mapping for it. Since the randomization granularity is
> 1 GB in 4-level and 512 GB in 5-level, copying one PUD entry is
> enough.

Can we get more of this info into comments in code?

> Hence, copy only the one PUD entry covering the area where physical
> address 0 resides. This is also preparation for later changing the
> randomization granularity of 5-level paging mode from 512 GB to 1 GB.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>
> ---
>  arch/x86/mm/kaslr.c | 72 ++++++++++++++++++---------------------------
>  1 file changed, 28 insertions(+), 44 deletions(-)
> 
> diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
> index 754b5da91d43..6b2a06c36b6f 100644
> --- a/arch/x86/mm/kaslr.c
> +++ b/arch/x86/mm/kaslr.c
> @@ -226,74 +226,58 @@ void __init kernel_randomize_memory(void)
>  
>  static void __meminit init_trampoline_pud(void)
>  {
> -	unsigned long paddr, paddr_next;
> +	unsigned long paddr, vaddr;
>  	pgd_t *pgd;
> -	pud_t *pud_page, *pud_page_tramp;
> -	int i;
>  
> +	p4d_t *p4d_page, *p4d_page_tramp, *p4d, *p4d_tramp;
> +	pud_t *pud_page, *pud_page_tramp, *pud, *pud_tramp;
> +
> +
> +	p4d_page_tramp = alloc_low_page();

I believe this line should be under

	if (pgtable_l5_enabled()) {

Right?


-- 
 Kirill A. Shutemov


* Re: [PATCH 1/2] x86/mm/KASLR: Only build one PUD entry of area for real mode trampoline
  2019-02-25 12:31   ` Kirill A. Shutemov
@ 2019-02-25 13:20     ` Baoquan He
  0 siblings, 0 replies; 5+ messages in thread
From: Baoquan He @ 2019-02-25 13:20 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: linux-kernel, dave.hansen, luto, peterz, tglx, mingo, bp, hpa,
	x86, kirill.shutemov, keescook, thgarnie

On 02/25/19 at 03:31pm, Kirill A. Shutemov wrote:
> On Sun, Feb 24, 2019 at 09:22:30PM +0800, Baoquan He wrote:
> > The current code builds the identity mapping for the real mode
> > trampoline by borrowing page tables from the direct mapping section
> > when KASLR is enabled. It copies the present entries of the first PUD
> > table in 4-level paging mode, or of the first P4D table in 5-level
> > paging mode.
> > 
> > However, only a very small area below 1 MB is reserved for the real
> > mode trampoline in reserve_real_mode(). It makes no sense to build
> > such a large mapping for it. Since the randomization granularity is
> > 1 GB in 4-level and 512 GB in 5-level, copying one PUD entry is
> > enough.
> 
> Can we get more of this info into comments in code?

Sure, I will add this as a comment above init_trampoline(). Thanks.

> 
> > Hence, copy only the one PUD entry covering the area where physical
> > address 0 resides. This is also preparation for later changing the
> > randomization granularity of 5-level paging mode from 512 GB to 1 GB.
> > 
> > Signed-off-by: Baoquan He <bhe@redhat.com>
> > ---
> >  arch/x86/mm/kaslr.c | 72 ++++++++++++++++++---------------------------
> >  1 file changed, 28 insertions(+), 44 deletions(-)
> > 
> > diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
> > index 754b5da91d43..6b2a06c36b6f 100644
> > --- a/arch/x86/mm/kaslr.c
> > +++ b/arch/x86/mm/kaslr.c
> > @@ -226,74 +226,58 @@ void __init kernel_randomize_memory(void)
> >  
> >  static void __meminit init_trampoline_pud(void)
> >  {
> > -	unsigned long paddr, paddr_next;
> > +	unsigned long paddr, vaddr;
> >  	pgd_t *pgd;
> > -	pud_t *pud_page, *pud_page_tramp;
> > -	int i;
> >  
> > +	p4d_t *p4d_page, *p4d_page_tramp, *p4d, *p4d_tramp;
> > +	pud_t *pud_page, *pud_page_tramp, *pud, *pud_tramp;
> > +
> > +
> > +	p4d_page_tramp = alloc_low_page();
> 
> I believe this line should be under
> 
> 	if (pgtable_l5_enabled()) {
> 
> Right?

Yeah, you are right. No need to waste one page in the 4-level case.

I will see if there are any other comments, then repost an update.
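
I.e., move the allocation under the 5-level check, something like this
(just a sketch; the actual repost may differ):

	if (pgtable_l5_enabled())
		p4d_page_tramp = alloc_low_page();
	pud_page_tramp = alloc_low_page();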

Thanks
Baoquan
