linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 1/2] kaslr: shift linear region randomization ahead of memory_limit
@ 2019-04-08 16:33 pierre Kuo
  2019-04-08 16:33 ` [PATCH v3 2/2] initrd: move initrd_start calculation within linear mapping range check pierre Kuo
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: pierre Kuo @ 2019-04-08 16:33 UTC (permalink / raw)
  To: will.deacon
  Cc: catalin.marinas, steven.price, f.fainelli, ard.biesheuvel,
	linux-kernel, linux-arm-kernel, vichy.kuo

The following is a schematic diagram of the code flow before and after
the modification.

Before:
if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} --(a)
if (memory_limit != PHYS_ADDR_MAX) {}                               --(b)
if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {}       --(c)
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {}                           --(d)*

After:
if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} --(a)
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {}                           --(d)*
if (memory_limit != PHYS_ADDR_MAX) {}                               --(b)
if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {}       --(c)

By moving the linear region randomization ahead of the memory_limit check,
all modifications of memstart_addr are grouped together, so code in (b) or
(c) can safely use the __phys_to_virt macro, if necessary.

Signed-off-by: pierre Kuo <vichy.kuo@gmail.com>
---
Changes in v2:
- add Fixes tag

Changes in v3:
- add a patch shifting linear region randomization ahead of
 memory_limit

 arch/arm64/mm/init.c | 33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 7205a9085b4d..5142020fc146 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -389,6 +389,23 @@ void __init arm64_memblock_init(void)
 		memblock_remove(0, memstart_addr);
 	}
 
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+		extern u16 memstart_offset_seed;
+		u64 range = linear_region_size -
+			    (memblock_end_of_DRAM() - memblock_start_of_DRAM());
+
+		/*
+		 * If the size of the linear region exceeds, by a sufficient
+		 * margin, the size of the region that the available physical
+		 * memory spans, randomize the linear region as well.
+		 */
+		if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
+			range /= ARM64_MEMSTART_ALIGN;
+			memstart_addr -= ARM64_MEMSTART_ALIGN *
+					 ((range * memstart_offset_seed) >> 16);
+		}
+	}
+
 	/*
 	 * Apply the memory limit if it was set. Since the kernel may be loaded
 	 * high up in memory, add back the kernel region that must be accessible
@@ -428,22 +445,6 @@ void __init arm64_memblock_init(void)
 		}
 	}
 
-	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
-		extern u16 memstart_offset_seed;
-		u64 range = linear_region_size -
-			    (memblock_end_of_DRAM() - memblock_start_of_DRAM());
-
-		/*
-		 * If the size of the linear region exceeds, by a sufficient
-		 * margin, the size of the region that the available physical
-		 * memory spans, randomize the linear region as well.
-		 */
-		if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
-			range /= ARM64_MEMSTART_ALIGN;
-			memstart_addr -= ARM64_MEMSTART_ALIGN *
-					 ((range * memstart_offset_seed) >> 16);
-		}
-	}
 
 	/*
 	 * Register the kernel text, kernel data, initrd, and initial
-- 
2.17.1
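
As an aside, the arithmetic in the moved block can be illustrated
standalone: memstart_offset_seed is a 16-bit value, so
(range * seed) >> 16 maps the seed onto [0, range) slots of
ARM64_MEMSTART_ALIGN bytes each (the alignment and sizes below are
assumed examples; the real alignment varies by configuration):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t align = 1ULL << 30;                /* assumed 1 GiB */
          uint64_t linear_region_size = 256ULL << 30; /* assumed */
          uint64_t dram_span = 4ULL << 30;            /* end - start of DRAM */
          uint16_t seed = 0x8000;                     /* example 16-bit seed */

          uint64_t range = (linear_region_size - dram_span) / align; /* 252 */
          uint64_t offset = align * ((range * seed) >> 16);  /* picks slot 126 */

          printf("memstart_addr -= %llu GiB\n",
                 (unsigned long long)(offset >> 30));
          return 0;
  }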



* [PATCH v3 2/2] initrd: move initrd_start calculation within linear mapping range check
  2019-04-08 16:33 [PATCH v3 1/2] kaslr: shift linear region randomization ahead of memory_limit pierre Kuo
@ 2019-04-08 16:33 ` pierre Kuo
  2019-04-16  5:26 ` [PATCH v3 1/2] kaslr: shift linear region randomization ahead of memory_limit pierre kuo
  2019-05-02 12:28 ` Ard Biesheuvel
  2 siblings, 0 replies; 5+ messages in thread
From: pierre Kuo @ 2019-04-08 16:33 UTC (permalink / raw)
  To: will.deacon
  Cc: catalin.marinas, steven.price, f.fainelli, ard.biesheuvel,
	linux-kernel, linux-arm-kernel, vichy.kuo

Previously, initrd_start and initrd_end were still assigned when either
(base < memblock_start_of_DRAM()) or (base + size >
memblock_start_of_DRAM() + linear_region_size) held.

That means that even when the linear mapping range check fails for the
initrd, virtual addresses are still computed for initrd_start and
initrd_end. Calculate initrd_start/initrd_end only when the linear
mapping range check passes.
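
For context, the range check referred to above sits in
arm64_memblock_init() and looks roughly like this (paraphrased from the
v5.1-era source; the warning text is abbreviated, and base/size are the
page-aligned bounds of the initrd):

  if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {
          /* the initrd must sit fully inside the linear mapping */
          if (WARN(base < memblock_start_of_DRAM() ||
                   base + size > memblock_start_of_DRAM() + linear_region_size,
                   "initrd not fully accessible via the linear mapping\n")) {
                  phys_initrd_size = 0;   /* disable the initrd */
          } else {
                  /* this patch moves the initrd_start/initrd_end
                   * assignment into this branch */
          }
  }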

Fixes: c756c592e442 ("arm64: Utilize phys_initrd_start/phys_initrd_size")
Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: pierre Kuo <vichy.kuo@gmail.com>
---
Changes in v2:
- add Fixes tag

Changes in v3:
- add a patch shifting linear region randomization ahead of
 memory_limit

 arch/arm64/mm/init.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 5142020fc146..566761da5719 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -442,6 +442,9 @@ void __init arm64_memblock_init(void)
 			memblock_remove(base, size); /* clear MEMBLOCK_ flags */
 			memblock_add(base, size);
 			memblock_reserve(base, size);
+			/* the generic initrd code expects virtual addresses */
+			initrd_start = __phys_to_virt(phys_initrd_start);
+			initrd_end = initrd_start + phys_initrd_size;
 		}
 	}
 
@@ -451,11 +454,6 @@ void __init arm64_memblock_init(void)
 	 * pagetables with memblock.
 	 */
 	memblock_reserve(__pa_symbol(_text), _end - _text);
-	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {
-		/* the generic initrd code expects virtual addresses */
-		initrd_start = __phys_to_virt(phys_initrd_start);
-		initrd_end = initrd_start + phys_initrd_size;
-	}
 
 	early_init_fdt_scan_reserved_mem();
 
-- 
2.17.1



* Re: [PATCH v3 1/2] kaslr: shift linear region randomization ahead of memory_limit
  2019-04-08 16:33 [PATCH v3 1/2] kaslr: shift linear region randomization ahead of memory_limit pierre Kuo
  2019-04-08 16:33 ` [PATCH v3 2/2] initrd: move initrd_start calculation within linear mapping range check pierre Kuo
@ 2019-04-16  5:26 ` pierre kuo
  2019-05-02 12:28 ` Ard Biesheuvel
  2 siblings, 0 replies; 5+ messages in thread
From: pierre kuo @ 2019-04-16  5:26 UTC (permalink / raw)
  To: Will Deacon
  Cc: Catalin Marinas, Steven Price, Florian Fainelli, Ard Biesheuvel,
	linux-kernel, linux-arm-kernel

Hi Will and all:
>
> The following is a schematic diagram of the code flow before and after
> the modification.
>
> Before:
> if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} --(a)
> if (memory_limit != PHYS_ADDR_MAX) {}                               --(b)
> if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {}       --(c)
> if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {}                           --(d)*
>
> After:
> if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} --(a)
> if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {}                           --(d)*
> if (memory_limit != PHYS_ADDR_MAX) {}                               --(b)
> if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {}       --(c)
>
> By moving the linear region randomization ahead of the memory_limit check,
> all modifications of memstart_addr are grouped together, so code in (b) or
> (c) can safely use the __phys_to_virt macro, if necessary.
>
> Signed-off-by: pierre Kuo <vichy.kuo@gmail.com>
> ---
> Changes in v2:
> - add Fixes tag
>
> Changes in v3:
> - add a patch shifting linear region randomization ahead of
>  memory_limit
>
>  arch/arm64/mm/init.c | 33 +++++++++++++++++----------------
>  1 file changed, 17 insertions(+), 16 deletions(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 7205a9085b4d..5142020fc146 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -389,6 +389,23 @@ void __init arm64_memblock_init(void)
>                 memblock_remove(0, memstart_addr);
>         }
>
> +       if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
> +               extern u16 memstart_offset_seed;
> +               u64 range = linear_region_size -
> +                           (memblock_end_of_DRAM() - memblock_start_of_DRAM());
> +
> +               /*
> +                * If the size of the linear region exceeds, by a sufficient
> +                * margin, the size of the region that the available physical
> +                * memory spans, randomize the linear region as well.
> +                */
> +               if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
> +                       range /= ARM64_MEMSTART_ALIGN;
> +                       memstart_addr -= ARM64_MEMSTART_ALIGN *
> +                                        ((range * memstart_offset_seed) >> 16);
> +               }
> +       }
> +
>         /*
>          * Apply the memory limit if it was set. Since the kernel may be loaded
>          * high up in memory, add back the kernel region that must be accessible
> @@ -428,22 +445,6 @@ void __init arm64_memblock_init(void)
>                 }
>         }
>
> -       if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
> -               extern u16 memstart_offset_seed;
> -               u64 range = linear_region_size -
> -                           (memblock_end_of_DRAM() - memblock_start_of_DRAM());
> -
> -               /*
> -                * If the size of the linear region exceeds, by a sufficient
> -                * margin, the size of the region that the available physical
> -                * memory spans, randomize the linear region as well.
> -                */
> -               if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
> -                       range /= ARM64_MEMSTART_ALIGN;
> -                       memstart_addr -= ARM64_MEMSTART_ALIGN *
> -                                        ((range * memstart_offset_seed) >> 16);
> -               }
> -       }
>
>         /*
>          * Register the kernel text, kernel data, initrd, and initial

Would you mind giving some comments and suggestions on these v3 patches?
https://lkml.org/lkml/2019/4/8/682
https://lkml.org/lkml/2019/4/8/683

I sincerely appreciate your kind help,


* Re: [PATCH v3 1/2] kaslr: shift linear region randomization ahead of memory_limit
  2019-04-08 16:33 [PATCH v3 1/2] kaslr: shift linear region randomization ahead of memory_limit pierre Kuo
  2019-04-08 16:33 ` [PATCH v3 2/2] initrd: move initrd_start calculation within linear mapping range check pierre Kuo
  2019-04-16  5:26 ` [PATCH v3 1/2] kaslr: shift linear region randomization ahead of memory_limit pierre kuo
@ 2019-05-02 12:28 ` Ard Biesheuvel
  2019-05-12 15:41   ` pierre kuo
  2 siblings, 1 reply; 5+ messages in thread
From: Ard Biesheuvel @ 2019-05-02 12:28 UTC (permalink / raw)
  To: pierre Kuo
  Cc: Will Deacon, Catalin Marinas, steven.price, Florian Fainelli,
	Linux Kernel Mailing List, linux-arm-kernel

On Mon, 8 Apr 2019 at 18:33, pierre Kuo <vichy.kuo@gmail.com> wrote:
>
> The following is a schematic diagram of the code flow before and after
> the modification.
>
> Before:
> if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} --(a)
> if (memory_limit != PHYS_ADDR_MAX) {}                               --(b)
> if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {}       --(c)
> if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {}                           --(d)*
>
> After:
> if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} --(a)
> if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {}                           --(d)*
> if (memory_limit != PHYS_ADDR_MAX) {}                               --(b)
> if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {}       --(c)
>
> By moving the linear region randomization ahead of the memory_limit check,
> all modifications of memstart_addr are grouped together, so code in (b) or
> (c) can safely use the __phys_to_virt macro, if necessary.
>

Why is this an advantage?

> Signed-off-by: pierre Kuo <vichy.kuo@gmail.com>
> ---
> Changes in v2:
> - add Fixes tag
>
> Changes in v3:
> - add a patch shifting linear region randomization ahead of
>  memory_limit
>
>  arch/arm64/mm/init.c | 33 +++++++++++++++++----------------
>  1 file changed, 17 insertions(+), 16 deletions(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 7205a9085b4d..5142020fc146 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -389,6 +389,23 @@ void __init arm64_memblock_init(void)
>                 memblock_remove(0, memstart_addr);
>         }
>
> +       if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
> +               extern u16 memstart_offset_seed;
> +               u64 range = linear_region_size -
> +                           (memblock_end_of_DRAM() - memblock_start_of_DRAM());
> +
> +               /*
> +                * If the size of the linear region exceeds, by a sufficient
> +                * margin, the size of the region that the available physical
> +                * memory spans, randomize the linear region as well.
> +                */
> +               if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
> +                       range /= ARM64_MEMSTART_ALIGN;
> +                       memstart_addr -= ARM64_MEMSTART_ALIGN *
> +                                        ((range * memstart_offset_seed) >> 16);
> +               }
> +       }
> +
>         /*
>          * Apply the memory limit if it was set. Since the kernel may be loaded
>          * high up in memory, add back the kernel region that must be accessible
> @@ -428,22 +445,6 @@ void __init arm64_memblock_init(void)
>                 }
>         }
>
> -       if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
> -               extern u16 memstart_offset_seed;
> -               u64 range = linear_region_size -
> -                           (memblock_end_of_DRAM() - memblock_start_of_DRAM());
> -
> -               /*
> -                * If the size of the linear region exceeds, by a sufficient
> -                * margin, the size of the region that the available physical
> -                * memory spans, randomize the linear region as well.
> -                */
> -               if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
> -                       range /= ARM64_MEMSTART_ALIGN;
> -                       memstart_addr -= ARM64_MEMSTART_ALIGN *
> -                                        ((range * memstart_offset_seed) >> 16);
> -               }
> -       }
>
>         /*
>          * Register the kernel text, kernel data, initrd, and initial
> --
> 2.17.1
>


* Re: [PATCH v3 1/2] kaslr: shift linear region randomization ahead of memory_limit
  2019-05-02 12:28 ` Ard Biesheuvel
@ 2019-05-12 15:41   ` pierre kuo
  0 siblings, 0 replies; 5+ messages in thread
From: pierre kuo @ 2019-05-12 15:41 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Will Deacon, Catalin Marinas, Steven Price, Florian Fainelli,
	Linux Kernel Mailing List, linux-arm-kernel

Hi Ard:
> > The following is a schematic diagram of the code flow before and after
> > the modification.
> >
> > Before:
> > if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} --(a)
> > if (memory_limit != PHYS_ADDR_MAX) {}                               --(b)
> > if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {}       --(c)
> > if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {}                           --(d)*
> >
> > After:
> > if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {} --(a)
> > if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {}                           --(d)*
> > if (memory_limit != PHYS_ADDR_MAX) {}                               --(b)
> > if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {}       --(c)
> >
> > By moving the linear region randomization ahead of the memory_limit check,
> > all modifications of memstart_addr are grouped together, so code in (b) or
> > (c) can safely use the __phys_to_virt macro, if necessary.
> >
>
> Why is this an advantage?

First, by putting (d) right behind (a), code in (b) and (c) can safely use
the __phys_to_virt macro, which depends on memstart_addr, if necessary.
That means:
(a)
(d)
----- all modifications of memstart_addr finished --------
(b) --> can safely use __phys_to_virt
(c) --> can safely use __phys_to_virt

Second, it makes the code more concise.
With (d) right behind (a), as the v3 patch below shows,
https://lkml.org/lkml/2019/4/8/683
initrd_start/initrd_end can be calculated only when the linear mapping
range check passes, and the duplicated
"if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size)" guard can be
eliminated, as sketched below.

Thanks for your message.

