* [PATCH] arm64: kaslr: ignore modulo offset when validating virtual displacement
@ 2017-08-18 17:42 Ard Biesheuvel
  2017-08-20 12:26 ` Catalin Marinas
  0 siblings, 1 reply; 5+ messages in thread
From: Ard Biesheuvel @ 2017-08-18 17:42 UTC (permalink / raw)
  To: linux-arm-kernel

In the KASLR setup routine, we ensure that the early virtual mapping
of the kernel image does not cover more than a single table entry at
the level above the swapper block level, so that the assembler routines
involved in setting up this mapping can remain simple.

In this calculation we add the proposed KASLR offset to the values of
the _text and _end markers, and reject it if they would end up falling
in different swapper table sized windows.

However, the addresses of _text and _end already account for the modulo
offset (the physical displacement modulo 2 MB), so adding it again
yields an incorrect result. Drop the modulo offset from the
calculation.

Fixes: 08cdac619c81 ("arm64: relocatable: deal with physically misaligned ...")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---

This still leaves the 16K pages issue, but I think this solves the problem
encountered by Mark.

 arch/arm64/kernel/head.S  |  1 -
 arch/arm64/kernel/kaslr.c | 12 +++++++++---
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index d3015172c136..7434ec0c7a27 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -354,7 +354,6 @@ __primary_switched:
 	tst	x23, ~(MIN_KIMG_ALIGN - 1)	// already running randomized?
 	b.ne	0f
 	mov	x0, x21				// pass FDT address in x0
-	mov	x1, x23				// pass modulo offset in x1
 	bl	kaslr_early_init		// parse FDT for KASLR options
 	cbz	x0, 0f				// KASLR disabled? just proceed
 	orr	x23, x23, x0			// record KASLR offset
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index a9710efb8c01..1d95c204186b 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -75,7 +75,7 @@ extern void *__init __fixmap_remap_fdt(phys_addr_t dt_phys, int *size,
  * containing function pointers) to be reinitialized, and zero-initialized
  * .bss variables will be reset to 0.
  */
-u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset)
+u64 __init kaslr_early_init(u64 dt_phys)
 {
 	void *fdt;
 	u64 seed, offset, mask, module_range;
@@ -133,9 +133,15 @@ u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset)
 	 * boundary (for 4KB/16KB/64KB granule kernels, respectively). If this
 	 * happens, increase the KASLR offset by the size of the kernel image
 	 * rounded up by SWAPPER_BLOCK_SIZE.
+	 *
+	 * NOTE: The references to _text and _end below will already take the
+	 *       modulo offset (the physical displacement modulo 2 MB) into
+	 *       account, given that the physical placement is controlled by
+	 *       the loader, and will not change as a result of the virtual
+	 *       mapping we choose.
 	 */
-	if ((((u64)_text + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT) !=
-	    (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT)) {
+	if ((((u64)_text + offset) >> SWAPPER_TABLE_SHIFT) !=
+	    (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT)) {
 		u64 kimg_sz = _end - _text;
 		offset = (offset + round_up(kimg_sz, SWAPPER_BLOCK_SIZE))
 				& mask;
-- 
2.11.0


* [PATCH] arm64: kaslr: ignore modulo offset when validating virtual displacement
  2017-08-18 17:42 [PATCH] arm64: kaslr: ignore modulo offset when validating virtual displacement Ard Biesheuvel
@ 2017-08-20 12:26 ` Catalin Marinas
  2017-08-20 18:43   ` Ard Biesheuvel
  0 siblings, 1 reply; 5+ messages in thread
From: Catalin Marinas @ 2017-08-20 12:26 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Aug 18, 2017 at 06:42:30PM +0100, Ard Biesheuvel wrote:
> In the KASLR setup routine, we ensure that the early virtual mapping
> of the kernel image does not cover more than a single table entry at
> the level above the swapper block level, so that the assembler routines
> involved in setting up this mapping can remain simple.
> 
> In this calculation we add the proposed KASLR offset to the values of
> the _text and _end markers, and reject it if they would end up falling
> in different swapper table sized windows.
> 
> However, when taking the addresses of _text and _end, the modulo offset
> (the physical displacement modulo 2 MB) is already accounted for, and
> so adding it again results in incorrect results. So disregard the modulo
> offset from the calculation.
> 
> Fixes: 08cdac619c81 ("arm64: relocatable: deal with physically misaligned ...")
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
> 
> This still leaves the 16K pages issue, but I think this solves the problem
> encountered by Mark.

Thanks. It does indeed seem to solve this aspect, at least in my tests,
but I'll let Mark confirm with his known-to-fail seeds. For this patch:

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>

For the 16K pages, we still need my patch on top of yours, slightly
changed to drop modulo_offset (see below). With both these patches, my
continuous reboot test seems to be fine, but I'll let it run overnight.

------8<-----------------------
From fead5f2937b3af7e1054c1de74b7e5fe5964ac02 Mon Sep 17 00:00:00 2001
From: Catalin Marinas <catalin.marinas@arm.com>
Date: Fri, 18 Aug 2017 15:39:24 +0100
Subject: [PATCH] arm64: kaslr: Adjust the offset to avoid Image across
 alignment boundary

With 16KB pages and a kernel Image larger than 16MB, the current
kaslr_early_init() logic for avoiding mappings across swapper table
boundaries fails since increasing the offset by kimg_sz just moves the
problem to the next boundary.

This patch decreases the offset by the boundary overflow amount, with a
slight risk of reduced entropy, as the kernel becomes more likely to be
found kimg_sz below a swapper table boundary.

Fixes: afd0e5a87670 ("arm64: kaslr: Fix up the kernel image alignment")
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/kernel/kaslr.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 1d95c204186b..b5fceb7efff5 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -131,8 +131,8 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	/*
 	 * The kernel Image should not extend across a 1GB/32MB/512MB alignment
 	 * boundary (for 4KB/16KB/64KB granule kernels, respectively). If this
-	 * happens, increase the KASLR offset by the size of the kernel image
-	 * rounded up by SWAPPER_BLOCK_SIZE.
+	 * happens, decrease the KASLR offset by the boundary overflow rounded
+	 * up to SWAPPER_BLOCK_SIZE.
 	 *
 	 * NOTE: The references to _text and _end below will already take the
 	 *       modulo offset (the physical displacement modulo 2 MB) into
@@ -142,8 +142,9 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	 */
 	if ((((u64)_text + offset) >> SWAPPER_TABLE_SHIFT) !=
 	    (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT)) {
-		u64 kimg_sz = _end - _text;
-		offset = (offset + round_up(kimg_sz, SWAPPER_BLOCK_SIZE))
+		u64 adjust = ((u64)_end + offset) &
+			((1 << SWAPPER_TABLE_SHIFT) - 1);
+		offset = (offset - round_up(adjust, SWAPPER_BLOCK_SIZE))
 				& mask;
 	}
 


* [PATCH] arm64: kaslr: ignore modulo offset when validating virtual displacement
  2017-08-20 12:26 ` Catalin Marinas
@ 2017-08-20 18:43   ` Ard Biesheuvel
  2017-08-21 10:05     ` Catalin Marinas
  2017-08-22 14:39     ` [PATCH v3] arm64: kaslr: Adjust the offset to avoid Image across alignment boundary Catalin Marinas
  0 siblings, 2 replies; 5+ messages in thread
From: Ard Biesheuvel @ 2017-08-20 18:43 UTC (permalink / raw)
  To: linux-arm-kernel

On 20 August 2017 at 13:26, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Fri, Aug 18, 2017 at 06:42:30PM +0100, Ard Biesheuvel wrote:
>> In the KASLR setup routine, we ensure that the early virtual mapping
>> of the kernel image does not cover more than a single table entry at
>> the level above the swapper block level, so that the assembler routines
>> involved in setting up this mapping can remain simple.
>>
>> In this calculation we add the proposed KASLR offset to the values of
>> the _text and _end markers, and reject it if they would end up falling
>> in different swapper table sized windows.
>>
>> However, when taking the addresses of _text and _end, the modulo offset
>> (the physical displacement modulo 2 MB) is already accounted for, and
>> so adding it again results in incorrect results. So disregard the modulo
>> offset from the calculation.
>>
>> Fixes: 08cdac619c81 ("arm64: relocatable: deal with physically misaligned ...")
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> ---
>>
>> This still leaves the 16K pages issue, but I think this solves the problem
>> encountered by Mark.
>
> Thanks. It indeed seems to solve this aspect, at least in my tests but
> I'll let Mark confirm with his known to fail seeds. For this patch:
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Tested-by: Catalin Marinas <catalin.marinas@arm.com>
>
> For the 16K pages, we still need my patch on top of yours, slightly
> changed to drop modulo_offset (see below). With both these patches, my
> continuous reboot test seems to be fine, but I'll let it run overnight.
>
> ------8<-----------------------
> From fead5f2937b3af7e1054c1de74b7e5fe5964ac02 Mon Sep 17 00:00:00 2001
> From: Catalin Marinas <catalin.marinas@arm.com>
> Date: Fri, 18 Aug 2017 15:39:24 +0100
> Subject: [PATCH] arm64: kaslr: Adjust the offset to avoid Image across
>  alignment boundary
>
> With 16KB pages and a kernel Image larger than 16MB, the current
> kaslr_early_init() logic for avoiding mappings across swapper table
> boundaries fails since increasing the offset by kimg_sz just moves the
> problem to the next boundary.
>
> This patch decreases the offset by the boundary overflow amount, with
> slight risk of reduced entropy as the kernel is more likely to be found
> at kimg_sz below a swapper table boundary.
>
> Fixes: afd0e5a87670 ("arm64: kaslr: Fix up the kernel image alignment")
> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> ---
>  arch/arm64/kernel/kaslr.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> index 1d95c204186b..b5fceb7efff5 100644
> --- a/arch/arm64/kernel/kaslr.c
> +++ b/arch/arm64/kernel/kaslr.c
> @@ -131,8 +131,8 @@ u64 __init kaslr_early_init(u64 dt_phys)
>         /*
>          * The kernel Image should not extend across a 1GB/32MB/512MB alignment
>          * boundary (for 4KB/16KB/64KB granule kernels, respectively). If this
> -        * happens, increase the KASLR offset by the size of the kernel image
> -        * rounded up by SWAPPER_BLOCK_SIZE.
> +        * happens, decrease the KASLR offset by the boundary overflow rounded
> +        * up to SWAPPER_BLOCK_SIZE.
>          *
>          * NOTE: The references to _text and _end below will already take the
>          *       modulo offset (the physical displacement modulo 2 MB) into
> @@ -142,8 +142,9 @@ u64 __init kaslr_early_init(u64 dt_phys)
>          */
>         if ((((u64)_text + offset) >> SWAPPER_TABLE_SHIFT) !=
>             (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT)) {
> -               u64 kimg_sz = _end - _text;
> -               offset = (offset + round_up(kimg_sz, SWAPPER_BLOCK_SIZE))
> +               u64 adjust = ((u64)_end + offset) &
> +                       ((1 << SWAPPER_TABLE_SHIFT) - 1);
> +               offset = (offset - round_up(adjust, SWAPPER_BLOCK_SIZE))
>                                 & mask;
>         }
>

At this point, _text is in the range [PAGE_OFFSET .. PAGE_OFFSET +
2MB), so we can simply round up offset instead, I think.

offset = round_up(offset, 1 << SWAPPER_TABLE_SHIFT);

That way we add rather than subtract, but this should not be a problem
(we don't randomize over the entire VMALLOC region anyway).


* [PATCH] arm64: kaslr: ignore modulo offset when validating virtual displacement
  2017-08-20 18:43   ` Ard Biesheuvel
@ 2017-08-21 10:05     ` Catalin Marinas
  2017-08-22 14:39     ` [PATCH v3] arm64: kaslr: Adjust the offset to avoid Image across alignment boundary Catalin Marinas
  1 sibling, 0 replies; 5+ messages in thread
From: Catalin Marinas @ 2017-08-21 10:05 UTC (permalink / raw)
  To: linux-arm-kernel

On Sun, Aug 20, 2017 at 07:43:05PM +0100, Ard Biesheuvel wrote:
> On 20 August 2017 at 13:26, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> > index 1d95c204186b..b5fceb7efff5 100644
> > --- a/arch/arm64/kernel/kaslr.c
> > +++ b/arch/arm64/kernel/kaslr.c
> > @@ -131,8 +131,8 @@ u64 __init kaslr_early_init(u64 dt_phys)
> >         /*
> >          * The kernel Image should not extend across a 1GB/32MB/512MB alignment
> >          * boundary (for 4KB/16KB/64KB granule kernels, respectively). If this
> > -        * happens, increase the KASLR offset by the size of the kernel image
> > -        * rounded up by SWAPPER_BLOCK_SIZE.
> > +        * happens, decrease the KASLR offset by the boundary overflow rounded
> > +        * up to SWAPPER_BLOCK_SIZE.
> >          *
> >          * NOTE: The references to _text and _end below will already take the
> >          *       modulo offset (the physical displacement modulo 2 MB) into
> > @@ -142,8 +142,9 @@ u64 __init kaslr_early_init(u64 dt_phys)
> >          */
> >         if ((((u64)_text + offset) >> SWAPPER_TABLE_SHIFT) !=
> >             (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT)) {
> > -               u64 kimg_sz = _end - _text;
> > -               offset = (offset + round_up(kimg_sz, SWAPPER_BLOCK_SIZE))
> > +               u64 adjust = ((u64)_end + offset) &
> > +                       ((1 << SWAPPER_TABLE_SHIFT) - 1);
> > +               offset = (offset - round_up(adjust, SWAPPER_BLOCK_SIZE))
> >                                 & mask;
> >         }
> >
> 
> At this point, _text is in the range [PAGE_OFFSET .. PAGE_OFFSET +
> 2MB), so we can simply round up offset instead, I think.
> 
> offset = round_up(offset, 1 << SWAPPER_TABLE_SHIFT);
> 
> That way we add rather than subtract but this should not be a problem
> (we don't randomize over the entire VMALLOC region anyway)

This would work as well, with a similar loss of randomness (I don't
think it matters whether _text or _end is more aligned with
1 << SWAPPER_TABLE_SHIFT).

-- 
Catalin


* [PATCH v3] arm64: kaslr: Adjust the offset to avoid Image across alignment boundary
  2017-08-20 18:43   ` Ard Biesheuvel
  2017-08-21 10:05     ` Catalin Marinas
@ 2017-08-22 14:39     ` Catalin Marinas
  1 sibling, 0 replies; 5+ messages in thread
From: Catalin Marinas @ 2017-08-22 14:39 UTC (permalink / raw)
  To: linux-arm-kernel

With 16KB pages and a kernel Image larger than 16MB, the current
kaslr_early_init() logic for avoiding mappings across swapper table
boundaries fails since increasing the offset by kimg_sz just moves the
problem to the next boundary.

This patch rounds the offset down to a multiple of
(1 << SWAPPER_TABLE_SHIFT) if the Image crosses a PMD_SIZE boundary.

Fixes: afd0e5a87670 ("arm64: kaslr: Fix up the kernel image alignment")
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---

Changes since v2:

Simplified the offset adjustment by just rounding it down to (1 <<
SWAPPER_TABLE_SHIFT). Tested together with Ard's patch:

http://lkml.kernel.org/r/20170818174230.30435-1-ard.biesheuvel@linaro.org

 arch/arm64/kernel/kaslr.c |   10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 1d95c204186b..47080c49cc7e 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -131,8 +131,7 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	/*
 	 * The kernel Image should not extend across a 1GB/32MB/512MB alignment
 	 * boundary (for 4KB/16KB/64KB granule kernels, respectively). If this
-	 * happens, increase the KASLR offset by the size of the kernel image
-	 * rounded up by SWAPPER_BLOCK_SIZE.
+	 * happens, round down the KASLR offset by (1 << SWAPPER_TABLE_SHIFT).
 	 *
 	 * NOTE: The references to _text and _end below will already take the
 	 *       modulo offset (the physical displacement modulo 2 MB) into
@@ -141,11 +140,8 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	 *       mapping we choose.
 	 */
 	if ((((u64)_text + offset) >> SWAPPER_TABLE_SHIFT) !=
-	    (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT)) {
-		u64 kimg_sz = _end - _text;
-		offset = (offset + round_up(kimg_sz, SWAPPER_BLOCK_SIZE))
-				& mask;
-	}
+	    (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT))
+		offset = round_down(offset, 1 << SWAPPER_TABLE_SHIFT);
 
 	if (IS_ENABLED(CONFIG_KASAN))
 		/*

