All of lore.kernel.org
* [PATCH v2 RESEND 0/2] x86/mm/KASLR: Fix the wrong size of memory sections
@ 2019-04-14  7:28 Baoquan He
  2019-04-14  7:28 ` [PATCH v2 RESEND 1/2] x86/mm/KASLR: Fix the size of the direct mapping section Baoquan He
  2019-04-14  7:28 ` [PATCH v2 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section Baoquan He
  0 siblings, 2 replies; 18+ messages in thread
From: Baoquan He @ 2019-04-14  7:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: x86, tglx, mingo, bp, hpa, kirill, keescook, peterz, thgarnie,
	herbert, mike.travis, frank.ramsay, yamada.masahiro, Baoquan He

Resend:
  Fine-tuned the patch logs.

v1->v2:
  Rewrote the logs of the two patches. No code changes.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
v1 background:
The fixes for these two bugs were carried in the earlier patchset, patch
4/6 and patch 5/6:

[PATCH v4 0/6] Several patches to fix code bugs, improve documents and clean up
http://lkml.kernel.org/r/20190314094645.4883-1-bhe@redhat.com

Later, Thomas suggested posting the bug-fix patches separately from the
clean-up patches, so the two fixes are sent here as a separate patchset.

As for the remaining bug-fix patch 6/6, that bug happens on SGI UV
systems. Mike and Frank have sent a machine with the relevant cards to
our lab on loan to me; I am still debugging and discussing the
verification with Mike.



Baoquan He (2):
  x86/mm/KASLR: Fix the size of the direct mapping section
  x86/mm/KASLR: Fix the size of vmemmap section

 arch/x86/mm/kaslr.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

-- 
2.17.2


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v2 RESEND 1/2] x86/mm/KASLR: Fix the size of the direct mapping section
  2019-04-14  7:28 [PATCH v2 RESEND 0/2] x86/mm/KASLR: Fix the wrong size of memory sections Baoquan He
@ 2019-04-14  7:28 ` Baoquan He
  2019-04-15 18:53   ` Borislav Petkov
  2019-04-14  7:28 ` [PATCH v2 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section Baoquan He
  1 sibling, 1 reply; 18+ messages in thread
From: Baoquan He @ 2019-04-14  7:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: x86, tglx, mingo, bp, hpa, kirill, keescook, peterz, thgarnie,
	herbert, mike.travis, frank.ramsay, yamada.masahiro, Baoquan He

kernel_randomize_memory() uses __PHYSICAL_MASK_SHIFT to calculate
the maximum amount of system RAM supported. The size of the direct
mapping section is obtained from the smaller one of the below two
values:

 (actual system RAM size + padding size) vs (max system RAM size supported)

This calculation is wrong since commit:
b83ce5ee91471d ("x86/mm/64: Make __PHYSICAL_MASK_SHIFT always 52").

In commit b83ce5ee91471d, __PHYSICAL_MASK_SHIFT was changed to be 52,
regardless of whether it's using 4-level or 5-level page tables.
It will always use 4 PB as the maximum amount of system RAM, even
in 4-level paging mode where it should be 64 TB.  Thus the size of
the direct mapping section will always be the sum of the actual
system RAM size plus the padding size.

Even when the amount of system RAM is 64 TB, the following layout will
still be used. Obviously KASLR will be weakened significantly.

   |____|_______actual RAM_______|_padding_|______the rest_______|
   0            64TB                                            ~120TB

What we want is the following:

   |____|_______actual RAM_______|_________the rest______________|
   0            64TB                                            ~120TB

So the code should use MAX_PHYSMEM_BITS instead. Fix it by replacing
__PHYSICAL_MASK_SHIFT with MAX_PHYSMEM_BITS.

Fixes: b83ce5ee9147 ("x86/mm/64: Make __PHYSICAL_MASK_SHIFT always 52")
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: Baoquan He <bhe@redhat.com>
---
 arch/x86/mm/kaslr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 9a8756517504..387d4ed25d7c 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -94,7 +94,7 @@ void __init kernel_randomize_memory(void)
 	if (!kaslr_memory_enabled())
 		return;
 
-	kaslr_regions[0].size_tb = 1 << (__PHYSICAL_MASK_SHIFT - TB_SHIFT);
+	kaslr_regions[0].size_tb = 1 << (MAX_PHYSMEM_BITS - TB_SHIFT);
 	kaslr_regions[1].size_tb = VMALLOC_SIZE_TB;
 
 	/*
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v2 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section
  2019-04-14  7:28 [PATCH v2 RESEND 0/2] x86/mm/KASLR: Fix the wrong size of memory sections Baoquan He
  2019-04-14  7:28 ` [PATCH v2 RESEND 1/2] x86/mm/KASLR: Fix the size of the direct mapping section Baoquan He
@ 2019-04-14  7:28 ` Baoquan He
  2019-04-15 19:47   ` Borislav Petkov
  2019-04-22  9:10   ` [PATCH v3 " Baoquan He
  1 sibling, 2 replies; 18+ messages in thread
From: Baoquan He @ 2019-04-14  7:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: x86, tglx, mingo, bp, hpa, kirill, keescook, peterz, thgarnie,
	herbert, mike.travis, frank.ramsay, yamada.masahiro, Baoquan He

kernel_randomize_memory() hardcodes the size of vmemmap section as 1 TB,
to support the maximum amount of system RAM in 4-level paging mode, 64 TB.

However, 1 TB is not enough for vmemmap in 5-level paging mode. Assuming
the size of struct page is 64 Bytes, to support 4 PB system RAM in 5-level,
64 TB of vmemmap area is needed. The wrong hardcoding may cause vmemmap
stamping into the following cpu_entry_area section, if KASLR puts vmemmap
very close to cpu_entry_area, and the actual area of vmemmap is much bigger
than 1 TB.

So here calculate the actual size of vmemmap region, then align up to 1 TB
boundary. In 4-level it's always 1 TB. In 5-level it's adjusted on demand.
The current code reserves 0.5 PB for vmemmap in 5-level. In this new methor,
the left space can be saved to join randomization to increase the entropy.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 arch/x86/mm/kaslr.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 387d4ed25d7c..4679a0075048 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -52,7 +52,7 @@ static __initdata struct kaslr_memory_region {
 } kaslr_regions[] = {
 	{ &page_offset_base, 0 },
 	{ &vmalloc_base, 0 },
-	{ &vmemmap_base, 1 },
+	{ &vmemmap_base, 0 },
 };
 
 /* Get size in bytes used by the memory region */
@@ -78,6 +78,7 @@ void __init kernel_randomize_memory(void)
 	unsigned long rand, memory_tb;
 	struct rnd_state rand_state;
 	unsigned long remain_entropy;
+	unsigned long vmemmap_size;
 
 	vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
 	vaddr = vaddr_start;
@@ -109,6 +110,14 @@ void __init kernel_randomize_memory(void)
 	if (memory_tb < kaslr_regions[0].size_tb)
 		kaslr_regions[0].size_tb = memory_tb;
 
+	/**
+	 * Calculate how many TB vmemmap region needs, and aligned to
+	 * 1TB boundary.
+	 */
+	vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
+		sizeof(struct page);
+	kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);
+
 	/* Calculate entropy available between regions */
 	remain_entropy = vaddr_end - vaddr_start;
 	for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 RESEND 1/2] x86/mm/KASLR: Fix the size of the direct mapping section
  2019-04-14  7:28 ` [PATCH v2 RESEND 1/2] x86/mm/KASLR: Fix the size of the direct mapping section Baoquan He
@ 2019-04-15 18:53   ` Borislav Petkov
  2019-04-17  8:35     ` Baoquan He
  0 siblings, 1 reply; 18+ messages in thread
From: Borislav Petkov @ 2019-04-15 18:53 UTC (permalink / raw)
  To: Baoquan He
  Cc: linux-kernel, x86, tglx, mingo, hpa, kirill, keescook, peterz,
	thgarnie, herbert, mike.travis, frank.ramsay, yamada.masahiro

On Sun, Apr 14, 2019 at 03:28:03PM +0800, Baoquan He wrote:
> kernel_randomize_memory() uses __PHYSICAL_MASK_SHIFT to calculate
> the maximum amount of system RAM supported. The size of the direct
> mapping section is obtained from the smaller one of the below two
> values:
> 
>  (actual system RAM size + padding size) vs (max system RAM size supported)
> 
> This calculation is wrong since commit:
> b83ce5ee91471d ("x86/mm/64: Make __PHYSICAL_MASK_SHIFT always 52").
> 
> In commit b83ce5ee91471d, __PHYSICAL_MASK_SHIFT was changed to be 52,
> regardless of whether it's using 4-level or 5-level page tables.
> It will always use 4 PB as the maximum amount of system RAM, even
> in 4-level paging mode where it should be 64 TB.  Thus the size of
> the direct mapping section will always be the sum of the actual
> system RAM size plus the padding size.
> 
> Even when the amount of system RAM is 64 TB, the following layout will
> still be used. Obviously KASLR will be weakened significantly.
> 
>    |____|_______actual RAM_______|_padding_|______the rest_______|
>    0            64TB                                            ~120TB
> 
> What we want is the following:
> 
>    |____|_______actual RAM_______|_________the rest______________|
>    0            64TB                                            ~120TB
> 
> So the code should use MAX_PHYSMEM_BITS instead. Fix it by replacing
> __PHYSICAL_MASK_SHIFT with MAX_PHYSMEM_BITS.

First of all, wonderful job!

This changelog is *light* *years* away from what you had before so keep
doing them this detailed and on point from now on!

Now, lemme make sure I understand exactly what you're fixing here:
you're fixing the case where CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING is
not 0. Which is the case when CONFIG_MEMORY_HOTPLUG is enabled.

Yes, no?

If so, please extend the commit message with that fact because it is
crucial and the last missing piece in the explanation.

Otherwise, when the padding is 0, the clamping:

        /* Adapt phyiscal memory region size based on available memory */
        if (memory_tb < kaslr_regions[0].size_tb)
                kaslr_regions[0].size_tb = memory_tb;

will "fix" the direct mapping section size.

Thx.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section
  2019-04-14  7:28 ` [PATCH v2 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section Baoquan He
@ 2019-04-15 19:47   ` Borislav Petkov
  2019-04-17  8:39     ` Baoquan He
  2019-04-26  9:23     ` Baoquan He
  2019-04-22  9:10   ` [PATCH v3 " Baoquan He
  1 sibling, 2 replies; 18+ messages in thread
From: Borislav Petkov @ 2019-04-15 19:47 UTC (permalink / raw)
  To: Baoquan He, kirill
  Cc: linux-kernel, x86, tglx, mingo, hpa, keescook, peterz, thgarnie,
	herbert, mike.travis, frank.ramsay, yamada.masahiro

On Sun, Apr 14, 2019 at 03:28:04PM +0800, Baoquan He wrote:
> kernel_randomize_memory() hardcodes the size of vmemmap section as 1 TB,
> to support the maximum amount of system RAM in 4-level paging mode, 64 TB.
> 
> However, 1 TB is not enough for vmemmap in 5-level paging mode. Assuming
> the size of struct page is 64 Bytes, to support 4 PB system RAM in 5-level,
> 64 TB of vmemmap area is needed. The wrong hardcoding may cause vmemmap
> stamping into the following cpu_entry_area section, if KASLR puts vmemmap
> very close to cpu_entry_area, and the actual area of vmemmap is much bigger
> than 1 TB.
> 
> So here calculate the actual size of vmemmap region, then align up to 1 TB
> boundary. In 4-level it's always 1 TB. In 5-level it's adjusted on demand.
> The current code reserves 0.5 PB for vmemmap in 5-level. In this new methor,
								       ^^^^^^^

Please introduce a spellchecker into your patch creation workflow.

> the left space can be saved to join randomization to increase the entropy.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>
> ---
>  arch/x86/mm/kaslr.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
> index 387d4ed25d7c..4679a0075048 100644
> --- a/arch/x86/mm/kaslr.c
> +++ b/arch/x86/mm/kaslr.c
> @@ -52,7 +52,7 @@ static __initdata struct kaslr_memory_region {
>  } kaslr_regions[] = {
>  	{ &page_offset_base, 0 },
>  	{ &vmalloc_base, 0 },
> -	{ &vmemmap_base, 1 },
> +	{ &vmemmap_base, 0 },
>  };
>  
>  /* Get size in bytes used by the memory region */
> @@ -78,6 +78,7 @@ void __init kernel_randomize_memory(void)
>  	unsigned long rand, memory_tb;
>  	struct rnd_state rand_state;
>  	unsigned long remain_entropy;
> +	unsigned long vmemmap_size;
>  
>  	vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
>  	vaddr = vaddr_start;
> @@ -109,6 +110,14 @@ void __init kernel_randomize_memory(void)
>  	if (memory_tb < kaslr_regions[0].size_tb)
>  		kaslr_regions[0].size_tb = memory_tb;
>  
> +	/**
> +	 * Calculate how many TB vmemmap region needs, and aligned to
> +	 * 1TB boundary.
> +	 */
> +	vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
> +		sizeof(struct page);
> +	kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);
> +
>  	/* Calculate entropy available between regions */
>  	remain_entropy = vaddr_end - vaddr_start;
>  	for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
> -- 

Kirill, ack?

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 RESEND 1/2] x86/mm/KASLR: Fix the size of the direct mapping section
  2019-04-15 18:53   ` Borislav Petkov
@ 2019-04-17  8:35     ` Baoquan He
  2019-04-17 15:01       ` Borislav Petkov
  2019-04-18  8:52       ` [tip:x86/urgent] " tip-bot for Baoquan He
  0 siblings, 2 replies; 18+ messages in thread
From: Baoquan He @ 2019-04-17  8:35 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: linux-kernel, x86, tglx, mingo, hpa, kirill, keescook, peterz,
	thgarnie, herbert, mike.travis, frank.ramsay, yamada.masahiro

On 04/15/19 at 08:53pm, Borislav Petkov wrote:
> Now, lemme make sure I understand exactly what you're fixing here:
> you're fixing the case where CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING is
> not 0. Which is the case when CONFIG_MEMORY_HOTPLUG is enabled.
> 
> Yes, no?

Yes, the padding is reserved specifically for possible future memory
hotplugging.
> 
> If so, please extend the commit message with that fact because it is
> crucial and the last missing piece in the explanation.
> 
> Otherwise, when the padding is 0, the clamping:
> 
>         /* Adapt phyiscal memory region size based on available memory */
>         if (memory_tb < kaslr_regions[0].size_tb)
>                 kaslr_regions[0].size_tb = memory_tb;
> 
> will "fix" the direct mapping section size.

I made a new one to add this fact, I can repost if it's OK to you.
Thanks.

From 6f0fdb9df6acdcd42b8cbdecaf5058c3090fd577 Mon Sep 17 00:00:00 2001
From: Baoquan He <bhe@redhat.com>
Date: Thu, 4 Apr 2019 10:03:13 +0800
Subject: [PATCH] x86/mm/KASLR: Fix the size of the direct mapping section

kernel_randomize_memory() uses __PHYSICAL_MASK_SHIFT to calculate
the maximum amount of system RAM supported. The size of the direct
mapping section is obtained from the smaller one of the below two
values:

 (actual system RAM size + padding size) vs (max system RAM size supported)

This calculation is wrong since commit:
b83ce5ee91471d ("x86/mm/64: Make __PHYSICAL_MASK_SHIFT always 52").

In commit b83ce5ee91471d, __PHYSICAL_MASK_SHIFT was changed to be 52,
regardless of whether it's using 4-level or 5-level page tables.
It will always use 4 PB as the maximum amount of system RAM, even
in 4-level paging mode where it should be 64 TB.  Thus the size of
the direct mapping section will always be the sum of the actual
system RAM size plus the padding size.

Even when the amount of system RAM is 64 TB, the following layout will
still be used. Obviously KASLR will be weakened significantly.

   |____|_______actual RAM_______|_padding_|______the rest_______|
   0            64TB                                            ~120TB

What we want is the following:

   |____|_______actual RAM_______|_________the rest______________|
   0            64TB                                            ~120TB

Here, the size of the padding region can be configured with
CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING, which is 10 TB by default. The
above issue only exists when CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING is
set to a non-zero value. Otherwise, using __PHYSICAL_MASK_SHIFT doesn't
affect KASLR.

So the code should use MAX_PHYSMEM_BITS instead. Fix it by replacing
__PHYSICAL_MASK_SHIFT with MAX_PHYSMEM_BITS.

Fixes: b83ce5ee9147 ("x86/mm/64: Make __PHYSICAL_MASK_SHIFT always 52")
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: Baoquan He <bhe@redhat.com>
---
 arch/x86/mm/kaslr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 78974ee5d97f..4679a0075048 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -95,7 +95,7 @@ void __init kernel_randomize_memory(void)
 	if (!kaslr_memory_enabled())
 		return;
 
-	kaslr_regions[0].size_tb = 1 << (__PHYSICAL_MASK_SHIFT - TB_SHIFT);
+	kaslr_regions[0].size_tb = 1 << (MAX_PHYSMEM_BITS - TB_SHIFT);
 	kaslr_regions[1].size_tb = VMALLOC_SIZE_TB;
 
 	/*
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section
  2019-04-15 19:47   ` Borislav Petkov
@ 2019-04-17  8:39     ` Baoquan He
  2019-04-26  9:23     ` Baoquan He
  1 sibling, 0 replies; 18+ messages in thread
From: Baoquan He @ 2019-04-17  8:39 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: kirill, linux-kernel, x86, tglx, mingo, hpa, keescook, peterz,
	thgarnie, herbert, mike.travis, frank.ramsay, yamada.masahiro

On 04/15/19 at 09:47pm, Borislav Petkov wrote:
> On Sun, Apr 14, 2019 at 03:28:04PM +0800, Baoquan He wrote:
> > kernel_randomize_memory() hardcodes the size of vmemmap section as 1 TB,
> > to support the maximum amount of system RAM in 4-level paging mode, 64 TB.
> > 
> > However, 1 TB is not enough for vmemmap in 5-level paging mode. Assuming
> > the size of struct page is 64 Bytes, to support 4 PB system RAM in 5-level,
> > 64 TB of vmemmap area is needed. The wrong hardcoding may cause vmemmap
> > stamping into the following cpu_entry_area section, if KASLR puts vmemmap
> > very close to cpu_entry_area, and the actual area of vmemmap is much bigger
> > than 1 TB.
> > 
> > So here calculate the actual size of vmemmap region, then align up to 1 TB
> > boundary. In 4-level it's always 1 TB. In 5-level it's adjusted on demand.
> > The current code reserves 0.5 PB for vmemmap in 5-level. In this new methor,
> 								       ^^^^^^^
> 
> Please introduce a spellchecker into your patch creation workflow.

Sorry, I forgot to run checkpatch this time. Will update.

> 
> > the left space can be saved to join randomization to increase the entropy.
> > 
> > Signed-off-by: Baoquan He <bhe@redhat.com>
> > ---
> >  arch/x86/mm/kaslr.c | 11 ++++++++++-
> >  1 file changed, 10 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
> > index 387d4ed25d7c..4679a0075048 100644
> > --- a/arch/x86/mm/kaslr.c
> > +++ b/arch/x86/mm/kaslr.c
> > @@ -52,7 +52,7 @@ static __initdata struct kaslr_memory_region {
> >  } kaslr_regions[] = {
> >  	{ &page_offset_base, 0 },
> >  	{ &vmalloc_base, 0 },
> > -	{ &vmemmap_base, 1 },
> > +	{ &vmemmap_base, 0 },
> >  };
> >  
> >  /* Get size in bytes used by the memory region */
> > @@ -78,6 +78,7 @@ void __init kernel_randomize_memory(void)
> >  	unsigned long rand, memory_tb;
> >  	struct rnd_state rand_state;
> >  	unsigned long remain_entropy;
> > +	unsigned long vmemmap_size;
> >  
> >  	vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
> >  	vaddr = vaddr_start;
> > @@ -109,6 +110,14 @@ void __init kernel_randomize_memory(void)
> >  	if (memory_tb < kaslr_regions[0].size_tb)
> >  		kaslr_regions[0].size_tb = memory_tb;
> >  
> > +	/**
> > +	 * Calculate how many TB vmemmap region needs, and aligned to
> > +	 * 1TB boundary.
> > +	 */
> > +	vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
> > +		sizeof(struct page);
> > +	kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);
> > +
> >  	/* Calculate entropy available between regions */
> >  	remain_entropy = vaddr_end - vaddr_start;
> >  	for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
> > -- 
> 
> Kirill, ack?
> 
> -- 
> Regards/Gruss,
>     Boris.
> 
> Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 RESEND 1/2] x86/mm/KASLR: Fix the size of the direct mapping section
  2019-04-17  8:35     ` Baoquan He
@ 2019-04-17 15:01       ` Borislav Petkov
  2019-04-17 22:42         ` Baoquan He
  2019-04-18  8:52       ` [tip:x86/urgent] " tip-bot for Baoquan He
  1 sibling, 1 reply; 18+ messages in thread
From: Borislav Petkov @ 2019-04-17 15:01 UTC (permalink / raw)
  To: Baoquan He
  Cc: linux-kernel, x86, tglx, mingo, hpa, kirill, keescook, peterz,
	thgarnie, herbert, mike.travis, frank.ramsay, yamada.masahiro

On Wed, Apr 17, 2019 at 04:35:36PM +0800, Baoquan He wrote:
> I made a new one to add this fact, I can repost if it's OK to you.

No, it looks ok and I can take it from here.

Also, resending too often is annoying, as I'm sure you know. Try to
stick to resending once a week.

Thx.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 RESEND 1/2] x86/mm/KASLR: Fix the size of the direct mapping section
  2019-04-17 15:01       ` Borislav Petkov
@ 2019-04-17 22:42         ` Baoquan He
  0 siblings, 0 replies; 18+ messages in thread
From: Baoquan He @ 2019-04-17 22:42 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: linux-kernel, x86, tglx, mingo, hpa, kirill, keescook, peterz,
	thgarnie, herbert, mike.travis, frank.ramsay, yamada.masahiro

On 04/17/19 at 05:01pm, Borislav Petkov wrote:
> On Wed, Apr 17, 2019 at 04:35:36PM +0800, Baoquan He wrote:
> > I made a new one to add this fact, I can repost if it's OK to you.
> 
> No, it looks ok and I can take it from here.
> 
> Also, resending too often is annoying, as I'm sure you know. Try to
> stick to resending once a week.

OK, thanks.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [tip:x86/urgent] x86/mm/KASLR: Fix the size of the direct mapping section
  2019-04-17  8:35     ` Baoquan He
  2019-04-17 15:01       ` Borislav Petkov
@ 2019-04-18  8:52       ` tip-bot for Baoquan He
  1 sibling, 0 replies; 18+ messages in thread
From: tip-bot for Baoquan He @ 2019-04-18  8:52 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: bp, thgarnie, dave.hansen, bhe, x86, tglx, kirill.shutemov, luto,
	peterz, linux-kernel, hpa, mingo, keescook

Commit-ID:  ec3937107ab43f3e8b2bc9dad95710043c462ff7
Gitweb:     https://git.kernel.org/tip/ec3937107ab43f3e8b2bc9dad95710043c462ff7
Author:     Baoquan He <bhe@redhat.com>
AuthorDate: Thu, 4 Apr 2019 10:03:13 +0800
Committer:  Borislav Petkov <bp@suse.de>
CommitDate: Thu, 18 Apr 2019 10:42:58 +0200

x86/mm/KASLR: Fix the size of the direct mapping section

kernel_randomize_memory() uses __PHYSICAL_MASK_SHIFT to calculate
the maximum amount of system RAM supported. The size of the direct
mapping section is obtained from the smaller one of the below two
values:

  (actual system RAM size + padding size) vs (max system RAM size supported)

This calculation is wrong since commit

  b83ce5ee9147 ("x86/mm/64: Make __PHYSICAL_MASK_SHIFT always 52").

In it, __PHYSICAL_MASK_SHIFT was changed to be 52, regardless of whether
the kernel is using 4-level or 5-level page tables. Thus, it will always
use 4 PB as the maximum amount of system RAM, even in 4-level paging
mode where it should actually be 64 TB.

Thus, the size of the direct mapping section will always
be the sum of the actual system RAM size plus the padding size.

Even when the amount of system RAM is 64 TB, the following layout will
still be used. Obviously KASLR will be weakened significantly.

   |____|_______actual RAM_______|_padding_|______the rest_______|
   0            64TB                                            ~120TB

Instead, it should be like this:

   |____|_______actual RAM_______|_________the rest______________|
   0            64TB                                            ~120TB

The size of padding region is controlled by
CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING, which is 10 TB by default.

The above issue only exists when
CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING is set to a non-zero value,
which is the case when CONFIG_MEMORY_HOTPLUG is enabled. Otherwise,
using __PHYSICAL_MASK_SHIFT doesn't affect KASLR.

Fix it by replacing __PHYSICAL_MASK_SHIFT with MAX_PHYSMEM_BITS.

 [ bp: Massage commit message. ]

Fixes: b83ce5ee9147 ("x86/mm/64: Make __PHYSICAL_MASK_SHIFT always 52")
Signed-off-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Garnier <thgarnie@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: frank.ramsay@hpe.com
Cc: herbert@gondor.apana.org.au
Cc: kirill@shutemov.name
Cc: mike.travis@hpe.com
Cc: thgarnie@google.com
Cc: x86-ml <x86@kernel.org>
Cc: yamada.masahiro@socionext.com
Link: https://lkml.kernel.org/r/20190417083536.GE7065@MiWiFi-R3L-srv
---
 arch/x86/mm/kaslr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 3f452ffed7e9..d669c5e797e0 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -94,7 +94,7 @@ void __init kernel_randomize_memory(void)
 	if (!kaslr_memory_enabled())
 		return;
 
-	kaslr_regions[0].size_tb = 1 << (__PHYSICAL_MASK_SHIFT - TB_SHIFT);
+	kaslr_regions[0].size_tb = 1 << (MAX_PHYSMEM_BITS - TB_SHIFT);
 	kaslr_regions[1].size_tb = VMALLOC_SIZE_TB;
 
 	/*

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v3 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section
  2019-04-14  7:28 ` [PATCH v2 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section Baoquan He
  2019-04-15 19:47   ` Borislav Petkov
@ 2019-04-22  9:10   ` Baoquan He
  2019-04-22  9:14     ` Baoquan He
  2019-04-28 18:54     ` Kirill A. Shutemov
  1 sibling, 2 replies; 18+ messages in thread
From: Baoquan He @ 2019-04-22  9:10 UTC (permalink / raw)
  To: linux-kernel
  Cc: x86, tglx, mingo, bp, hpa, kirill.shutemov, keescook, peterz,
	thgarnie, herbert, mike.travis, frank.ramsay, yamada.masahiro

kernel_randomize_memory() hardcodes the size of vmemmap section as 1 TB,
to support the maximum amount of system RAM in 4-level paging mode, 64 TB.

However, 1 TB is not enough for vmemmap in 5-level paging mode. Assuming
the size of struct page is 64 Bytes, to support 4 PB system RAM in 5-level,
64 TB of vmemmap area is needed. The wrong hardcoding may cause vmemmap
to stomp into the following cpu_entry_area section, if KASLR puts vmemmap
very close to cpu_entry_area and the actual vmemmap area is much bigger
than 1 TB.

So here, calculate the actual size the vmemmap region needs, then align
it up to a 1 TB boundary. In 4-level paging mode it's always 1 TB; in
5-level it's adjusted on demand. The current code reserves 0.5 PB for
vmemmap in 5-level. With this new method, the spare space can be given
back to randomization to increase the entropy.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
v2->v3:
  Fix typo Boris pointed out. 

 arch/x86/mm/kaslr.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index 387d4ed25d7c..4679a0075048 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -52,7 +52,7 @@ static __initdata struct kaslr_memory_region {
 } kaslr_regions[] = {
 	{ &page_offset_base, 0 },
 	{ &vmalloc_base, 0 },
-	{ &vmemmap_base, 1 },
+	{ &vmemmap_base, 0 },
 };
 
 /* Get size in bytes used by the memory region */
@@ -78,6 +78,7 @@ void __init kernel_randomize_memory(void)
 	unsigned long rand, memory_tb;
 	struct rnd_state rand_state;
 	unsigned long remain_entropy;
+	unsigned long vmemmap_size;
 
 	vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
 	vaddr = vaddr_start;
@@ -109,6 +110,14 @@ void __init kernel_randomize_memory(void)
 	if (memory_tb < kaslr_regions[0].size_tb)
 		kaslr_regions[0].size_tb = memory_tb;
 
+	/**
+	 * Calculate how many TB vmemmap region needs, and aligned to
+	 * 1TB boundary.
+	 */
+	vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
+		sizeof(struct page);
+	kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);
+
 	/* Calculate entropy available between regions */
 	remain_entropy = vaddr_end - vaddr_start;
 	for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section
  2019-04-22  9:10   ` [PATCH v3 " Baoquan He
@ 2019-04-22  9:14     ` Baoquan He
  2019-04-28 18:54     ` Kirill A. Shutemov
  1 sibling, 0 replies; 18+ messages in thread
From: Baoquan He @ 2019-04-22  9:14 UTC (permalink / raw)
  To: linux-kernel, kirill.shutemov, keescook
  Cc: x86, tglx, mingo, bp, hpa, peterz, thgarnie, herbert,
	mike.travis, frank.ramsay, yamada.masahiro

Hi Kirill, Kees,

On 04/22/19 at 05:10pm, Baoquan He wrote:
> kernel_randomize_memory() hardcodes the size of vmemmap section as 1 TB,
> to support the maximum amount of system RAM in 4-level paging mode, 64 TB.

Could you help review this one, and offer ack if it's OK to you?

Thanks
Baoquan

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section
  2019-04-15 19:47   ` Borislav Petkov
  2019-04-17  8:39     ` Baoquan He
@ 2019-04-26  9:23     ` Baoquan He
  2019-04-26 10:04       ` Borislav Petkov
  1 sibling, 1 reply; 18+ messages in thread
From: Baoquan He @ 2019-04-26  9:23 UTC (permalink / raw)
  To: Borislav Petkov, keescook
  Cc: kirill, linux-kernel, x86, tglx, mingo, hpa, keescook, peterz,
	thgarnie, herbert, mike.travis, frank.ramsay, yamada.masahiro

Hi Boris,

On 04/15/19 at 09:47pm, Borislav Petkov wrote:
> On Sun, Apr 14, 2019 at 03:28:04PM +0800, Baoquan He wrote:
> > kernel_randomize_memory() hardcodes the size of vmemmap section as 1 TB,
> > to support the maximum amount of system RAM in 4-level paging mode, 64 TB.
> > 
> > However, 1 TB is not enough for vmemmap in 5-level paging mode. Assuming
> > the size of struct page is 64 Bytes, to support 4 PB system RAM in 5-level,
> > 64 TB of vmemmap area is needed. The wrong hardcoding may cause vmemmap
> > stamping into the following cpu_entry_area section, if KASLR puts vmemmap
> > very close to cpu_entry_area, and the actual area of vmemmap is much bigger
> > than 1 TB.
 
> 
> Kirill, ack?

I sent private mail to Kirill and Kees. Kirill hasn't replied yet; he
could be busy with something else, as he hasn't shown up on lkml
recently.

Kees kindly replied and said he couldn't find this mail thread. He told
me I can add his Reviewed-by, as he acked this patchset in the v2
thread; I just updated it later to tune the log and correct typos.
http://lkml.kernel.org/r/CAGXu5j+o4aSx9mMDJqTMOp-VrvWes-2YEwR1f29z8dm0rUfzGQ@mail.gmail.com

Can this be picked into tip with Kees' ack?

Thanks
Baoquan


* Re: [PATCH v2 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section
  2019-04-26  9:23     ` Baoquan He
@ 2019-04-26 10:04       ` Borislav Petkov
  2019-04-26 10:18         ` Baoquan He
  0 siblings, 1 reply; 18+ messages in thread
From: Borislav Petkov @ 2019-04-26 10:04 UTC (permalink / raw)
  To: Baoquan He
  Cc: keescook, kirill, linux-kernel, x86, tglx, mingo, hpa, peterz,
	thgarnie, herbert, mike.travis, frank.ramsay, yamada.masahiro

On Fri, Apr 26, 2019 at 05:23:48PM +0800, Baoquan He wrote:
> I sent private mail to Kirill and Kees. Kirill hasn't replied yet; he
> could be busy with something else, as he hasn't shown up recently on
> lkml.

I don't understand what the hurry is.

The merge window is imminent and we only pick obvious fixes. That
doesn't qualify as such, AFAICT.

> Kees kindly replied and said he couldn't find this mail thread. He told
> me I can add his Reviewed-by, as he had acked this patchset in the v2
> thread. I only updated it later to tune the log and correct typos.
> http://lkml.kernel.org/r/CAGXu5j+o4aSx9mMDJqTMOp-VrvWes-2YEwR1f29z8dm0rUfzGQ@mail.gmail.com

Yes, when you get Reviewed-by:'s or other tags from reviewers, you
*add* them to your next submission when the patch doesn't change in a
non-trivial fashion. You should know that...

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.


* Re: [PATCH v2 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section
  2019-04-26 10:04       ` Borislav Petkov
@ 2019-04-26 10:18         ` Baoquan He
  0 siblings, 0 replies; 18+ messages in thread
From: Baoquan He @ 2019-04-26 10:18 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: keescook, kirill, linux-kernel, x86, tglx, mingo, hpa, peterz,
	thgarnie, herbert, mike.travis, frank.ramsay, yamada.masahiro

On 04/26/19 at 12:04pm, Borislav Petkov wrote:
> On Fri, Apr 26, 2019 at 05:23:48PM +0800, Baoquan He wrote:
> > I sent private mail to Kirill and Kees. Kirill hasn't replied yet; he
> > could be busy with something else, as he hasn't shown up recently on
> > lkml.
> 
> I don't understand what the hurry is.
> 
> The merge window is imminent and we only pick obvious fixes. That
> doesn't qualify as such, AFAICT.

OK.

> 
> > Kees kindly replied and said he couldn't find this mail thread. He told
> > me I can add his Reviewed-by, as he had acked this patchset in the v2
> > thread. I only updated it later to tune the log and correct typos.
> > http://lkml.kernel.org/r/CAGXu5j+o4aSx9mMDJqTMOp-VrvWes-2YEwR1f29z8dm0rUfzGQ@mail.gmail.com
> 
> Yes, when you get Reviewed-by:'s or other tags from reviewers, you
> *add* them to your next submission when the patch doesn't change in a
> non-trivial fashion. You should know that...

OK, will remember it. Thanks.


* Re: [PATCH v3 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section
  2019-04-22  9:10   ` [PATCH v3 " Baoquan He
  2019-04-22  9:14     ` Baoquan He
@ 2019-04-28 18:54     ` Kirill A. Shutemov
  2019-04-29  8:12       ` Baoquan He
  1 sibling, 1 reply; 18+ messages in thread
From: Kirill A. Shutemov @ 2019-04-28 18:54 UTC (permalink / raw)
  To: Baoquan He
  Cc: linux-kernel, x86, tglx, mingo, bp, hpa, kirill.shutemov,
	keescook, peterz, thgarnie, herbert, mike.travis, frank.ramsay,
	yamada.masahiro

On Mon, Apr 22, 2019 at 05:10:45PM +0800, Baoquan He wrote:
> kernel_randomize_memory() hardcodes the size of the vmemmap section as
> 1 TB, to support the maximum amount of system RAM in 4-level paging
> mode, 64 TB.
> 
> However, 1 TB is not enough for vmemmap in 5-level paging mode. Assuming
> the size of struct page is 64 bytes, supporting 4 PB of system RAM in
> 5-level mode requires 64 TB of vmemmap area. The wrong hardcoding may
> cause vmemmap to stomp into the following cpu_entry_area section if
> KASLR puts vmemmap very close to cpu_entry_area and the actual vmemmap
> area is much bigger than 1 TB.
> 
> So calculate the actual size needed for the vmemmap region, then align
> it up to a 1 TB boundary. In 4-level paging mode it is always 1 TB; in
> 5-level it is adjusted on demand. The current code reserves 0.5 PB for
> vmemmap in 5-level mode; with this change, the space saved can be added
> back to the randomization pool to increase entropy.
> 
> Signed-off-by: Baoquan He <bhe@redhat.com>
> ---
> v2->v3:
>   Fix the typo Boris pointed out.
> 
>  arch/x86/mm/kaslr.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
> index 387d4ed25d7c..4679a0075048 100644
> --- a/arch/x86/mm/kaslr.c
> +++ b/arch/x86/mm/kaslr.c
> @@ -52,7 +52,7 @@ static __initdata struct kaslr_memory_region {
>  } kaslr_regions[] = {
>  	{ &page_offset_base, 0 },
>  	{ &vmalloc_base, 0 },
> -	{ &vmemmap_base, 1 },
> +	{ &vmemmap_base, 0 },
>  };
>  
>  /* Get size in bytes used by the memory region */
> @@ -78,6 +78,7 @@ void __init kernel_randomize_memory(void)
>  	unsigned long rand, memory_tb;
>  	struct rnd_state rand_state;
>  	unsigned long remain_entropy;
> +	unsigned long vmemmap_size;
>  
>  	vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
>  	vaddr = vaddr_start;
> @@ -109,6 +110,14 @@ void __init kernel_randomize_memory(void)
>  	if (memory_tb < kaslr_regions[0].size_tb)
>  		kaslr_regions[0].size_tb = memory_tb;
>  
> +	/**

Nit: that is weird style for inline comment.

> +	 * Calculate how many TB vmemmap region needs, and aligned to
> +	 * 1TB boundary.
> +	 */
> +	vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
> +		sizeof(struct page);

Hm. Don't we need to take into account alignment requirements for struct
page here? I'm worried about some exotic debug kernel config where
sizeof(struct page) doesn't satisfy __alignof__(struct page).

> +	kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);
> +
>  	/* Calculate entropy available between regions */
>  	remain_entropy = vaddr_end - vaddr_start;
>  	for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
-- 
 Kirill A. Shutemov
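
The sizing arithmetic in the hunk above can be sanity-checked with a short
standalone sketch (Python used here purely as a calculator, not kernel code;
the TB_SHIFT/PAGE_SHIFT values and the 64-byte struct page size are the
assumptions stated in the patch log, not taken from the kernel headers):

```python
TB_SHIFT = 40            # 1 TB = 2^40 bytes
PAGE_SHIFT = 12          # 4 KiB pages
STRUCT_PAGE_SIZE = 64    # bytes, as assumed in the patch log

def div_round_up(n, d):
    # mirrors the kernel's DIV_ROUND_UP() macro
    return (n + d - 1) // d

def vmemmap_size_tb(ram_tb):
    # pages needed to map ram_tb terabytes of RAM, times
    # sizeof(struct page), rounded up to a 1 TB boundary
    vmemmap_bytes = (ram_tb << (TB_SHIFT - PAGE_SHIFT)) * STRUCT_PAGE_SIZE
    return div_round_up(vmemmap_bytes, 1 << TB_SHIFT)

print(vmemmap_size_tb(64))    # 4-level max, 64 TB RAM -> 1 (TB of vmemmap)
print(vmemmap_size_tb(4096))  # 5-level, 4 PB RAM -> 64 (TB of vmemmap)
```

This reproduces the two figures in the patch log: the old hardcoded 1 TB is
exactly right for the 4-level maximum of 64 TB of RAM, while 4 PB of RAM in
5-level mode needs 64 TB of vmemmap.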


* Re: [PATCH v3 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section
  2019-04-28 18:54     ` Kirill A. Shutemov
@ 2019-04-29  8:12       ` Baoquan He
  2019-04-29 13:16         ` Kirill A. Shutemov
  0 siblings, 1 reply; 18+ messages in thread
From: Baoquan He @ 2019-04-29  8:12 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: linux-kernel, x86, tglx, mingo, bp, hpa, kirill.shutemov,
	keescook, peterz, thgarnie, herbert, mike.travis, frank.ramsay,
	yamada.masahiro

On 04/28/19 at 09:54pm, Kirill A. Shutemov wrote:
> > @@ -109,6 +110,14 @@ void __init kernel_randomize_memory(void)
> >  	if (memory_tb < kaslr_regions[0].size_tb)
> >  		kaslr_regions[0].size_tb = memory_tb;
> >  
> > +	/**
> 
> Nit: that is weird style for inline comment.

Right, will fix.

Thanks a lot for reviewing.

> 
> > +	 * Calculate how many TB vmemmap region needs, and aligned to
> > +	 * 1TB boundary.
> > +	 */
> > +	vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
> > +		sizeof(struct page);
> 
> Hm. Don't we need to take into account alignment requirements for struct
> page here? I'm worried about some exotic debug kernel config where
> sizeof(struct page) doesn't satisfy __alignof__(struct page).

I know sizeof(struct page) already accounts for the struct's own
alignment and padding. Regarding __alignof__(struct page), would it
conflict with the code below that converts pfn <-> page? I'm not sure I
got your point.

#elif defined(CONFIG_SPARSEMEM_VMEMMAP)

/* memmap is virtually contiguous.  */
#define __pfn_to_page(pfn)      (vmemmap + (pfn))
#define __page_to_pfn(page)     (unsigned long)((page) - vmemmap)

#elif...
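
The point being made here, that C guarantees sizeof() is a multiple of the
type's alignment, so every element of a struct page array (which is what
vmemmap is) stays aligned, can be illustrated with a small ctypes model
(the field layout below is hypothetical, not the real struct page):

```python
import ctypes

class Page(ctypes.Structure):
    # hypothetical stand-in for struct page: mixed-size members
    # force the compiler to insert tail padding
    _fields_ = [("flags", ctypes.c_ulong),
                ("refcount", ctypes.c_int),
                ("mapping", ctypes.c_void_p)]

# sizeof is always a whole multiple of the alignment, so pointer
# arithmetic over an array never produces a misaligned element
assert ctypes.sizeof(Page) % ctypes.alignment(Page) == 0

def page_addr(vmemmap_base, pfn):
    # model of __pfn_to_page(): base plus pfn whole elements
    return vmemmap_base + pfn * ctypes.sizeof(Page)
```

Because `vmemmap + (pfn)` in the macro above advances in units of
sizeof(struct page), the alignment guarantee is what makes the simple
pfn <-> page arithmetic safe.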


> 
> > +	kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);
> > +
> >  	/* Calculate entropy available between regions */
> >  	remain_entropy = vaddr_end - vaddr_start;
> >  	for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
> -- 
>  Kirill A. Shutemov


* Re: [PATCH v3 RESEND 2/2] x86/mm/KASLR: Fix the size of vmemmap section
  2019-04-29  8:12       ` Baoquan He
@ 2019-04-29 13:16         ` Kirill A. Shutemov
  0 siblings, 0 replies; 18+ messages in thread
From: Kirill A. Shutemov @ 2019-04-29 13:16 UTC (permalink / raw)
  To: Baoquan He
  Cc: linux-kernel, x86, tglx, mingo, bp, hpa, kirill.shutemov,
	keescook, peterz, thgarnie, herbert, mike.travis, frank.ramsay,
	yamada.masahiro

On Mon, Apr 29, 2019 at 04:12:46PM +0800, Baoquan He wrote:
> > > +	 * Calculate how many TB vmemmap region needs, and aligned to
> > > +	 * 1TB boundary.
> > > +	 */
> > > +	vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
> > > +		sizeof(struct page);
> > 
> > Hm. Don't we need to take into account alignment requirements for struct
> > page here? I'm worried about some exotic debug kernel config where
> > sizeof(struct page) doesn't satisfy __alignof__(struct page).
> 
> I know sizeof(struct page) already accounts for the struct's own
> alignment and padding.

I didn't realize that. Sorry for the noise.

Acked-by: Kirill A. Shutemov <kirill@linux.intel.com>

-- 
 Kirill A. Shutemov
