linux-kernel.vger.kernel.org archive mirror
* [PATCH] arm64: kaslr: Add 2MB correction for aligning kernel image
@ 2017-03-22  8:55 Srinivas Ramana
  2017-03-22  9:27 ` Ard Biesheuvel
  0 siblings, 1 reply; 8+ messages in thread
From: Srinivas Ramana @ 2017-03-22  8:55 UTC (permalink / raw)
  To: catalin.marinas, will.deacon, ard.biesheuvel
  Cc: linux-arm-kernel, linux-kernel, linux-arm-msm, Neeraj Upadhyay,
	Srinivas Ramana

From: Neeraj Upadhyay <neeraju@codeaurora.org>

If the kernel image extends across an alignment boundary, the existing
code increases the KASLR offset by the size of the kernel image. The
offset is masked after this correction. There are cases where, even
after masking, the kernel image still extends across the boundary.
This eventually results in only a 2MB block getting mapped while the
page tables are created, which leads to data aborts when unmapped
regions are accessed during the second relocation (with the KASLR
offset) in __primary_switch. To fix this problem, add a 2MB correction
to the offset, in addition to the kernel image size, before applying
the mask.

For example, consider the case below, where the kernel image still
crosses a 1GB alignment boundary after the offset is masked; this is
fixed by adding the 2MB correction.

SWAPPER_TABLE_SHIFT = 30
Swapper using section maps with section size 2MB.
CONFIG_PGTABLE_LEVELS = 3
VA_BITS = 39

_text  : 0xffffff8008080000
_end   : 0xffffff800aa1b000
offset : 0x1f35600000
mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1)

(_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
(_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d

offset after existing correction (before mask) = 0x1f37f9b000
(_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
(_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d

offset (after mask) = 0x1f37e00000
(_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
(_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d

new offset w/ 2MB correction (before mask) = 0x1f3819b000
new offset w/ 2MB correction (after mask) = 0x1f38000000
(_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
(_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d

Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>
---
 arch/arm64/kernel/kaslr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 769f24ef628c..7b8af985e497 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -135,7 +135,7 @@ u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset)
 	 */
 	if ((((u64)_text + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT) !=
 	    (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT))
-		offset = (offset + (u64)(_end - _text)) & mask;
+		offset = (offset + (u64)(_end - _text) + SZ_2M) & mask;
 
 	if (IS_ENABLED(CONFIG_KASAN))
 		/*
-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc., 
is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
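
For reference, the walkthrough above can be reproduced with a small
user-space C program. This is only a minimal sketch: the configuration
constants and the sample _text/_end/offset values are copied from the
commit message, and SZ_2M and the mask are open-coded here rather than
taken from kernel headers.

#include <stdio.h>
#include <stdint.h>

#define VA_BITS			39
#define SWAPPER_TABLE_SHIFT	30	/* 1GB entries: 4KB granule, 3 levels */
#define SZ_2M			0x200000ULL

int main(void)
{
	/* Sample values from the commit message. */
	uint64_t text    = 0xffffff8008080000ULL;	/* _text */
	uint64_t end     = 0xffffff800aa1b000ULL;	/* _end  */
	uint64_t offset  = 0x1f35600000ULL;
	uint64_t mask    = ((1ULL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1);
	uint64_t kimg_sz = end - text;

	/* Existing correction: add the raw image size, then mask. */
	uint64_t old_off = (offset + kimg_sz) & mask;
	/* Correction proposed above: add an extra 2MB before masking. */
	uint64_t new_off = (offset + kimg_sz + SZ_2M) & mask;

	printf("old: _text -> %#llx, _end -> %#llx\n",
	       (unsigned long long)((text + old_off) >> SWAPPER_TABLE_SHIFT),
	       (unsigned long long)((end + old_off) >> SWAPPER_TABLE_SHIFT));
	printf("new: _text -> %#llx, _end -> %#llx\n",
	       (unsigned long long)((text + new_off) >> SWAPPER_TABLE_SHIFT),
	       (unsigned long long)((end + new_off) >> SWAPPER_TABLE_SHIFT));
	return 0;
}

With these inputs the "old" line prints 0x3fffffe7c and 0x3fffffe7d
(the image still straddles a 1GB boundary after masking), while the
"new" line prints 0x3fffffe7d for both. The seed-derived offset is
already a 2MB multiple but the raw image size is not, so masking the
corrected offset rounds it back down to a 2MB multiple, which can pull
_text back across the boundary it was just bumped over; the extra 2MB
absorbs that rounding.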


* Re: [PATCH] arm64: kaslr: Add 2MB correction for aligning kernel image
  2017-03-22  8:55 [PATCH] arm64: kaslr: Add 2MB correction for aligning kernel image Srinivas Ramana
@ 2017-03-22  9:27 ` Ard Biesheuvel
  2017-03-22 11:38   ` [PATCH v2] arm64: kaslr: Fix up the kernel image alignment Srinivas Ramana
  0 siblings, 1 reply; 8+ messages in thread
From: Ard Biesheuvel @ 2017-03-22  9:27 UTC (permalink / raw)
  To: Srinivas Ramana
  Cc: catalin.marinas, will.deacon, linux-arm-kernel, linux-kernel,
	linux-arm-msm, Neeraj Upadhyay


> On 22 Mar 2017, at 08:55, Srinivas Ramana <sramana@codeaurora.org> wrote:
> 
> From: Neeraj Upadhyay <neeraju@codeaurora.org>
> 
> If kernel image extends across alignment boundary, existing
> code increases the KASLR offset by size of kernel image. The
> offset is masked after resizing. There are cases, where after
> masking, we may still have kernel image extending across
> boundary. This eventually results in only 2MB block getting
> mapped while creating the page tables. This results in data aborts
> while accessing unmapped regions during second relocation (with
> kaslr offset) in __primary_switch. To fix this problem, add a
> 2MB correction to offset along with the correction of kernel
> image size, before applying mask.
> 
> For example consider below case, where kernel image still crosses
> 1GB alignment boundary, after masking the offset, which is fixed
> by adding 2MB correction.
> 
> SWAPPER_TABLE_SHIFT = 30
> Swapper using section maps with section size 2MB.
> CONFIG_PGTABLE_LEVELS = 3
> VA_BITS = 39
> 
> _text  : 0xffffff8008080000
> _end   : 0xffffff800aa1b000
> offset : 0x1f35600000
> mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1)
> 
> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
> 
> offset after existing correction (before mask) = 0x1f37f9b000
> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
> 
> offset (after mask) = 0x1f37e00000
> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
> 
> new offset w/ 2MB correction (before mask) = 0x1f3819b000
> new offset w/ 2MB correction (after mask) = 0x1f38000000
> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
> 
> Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
> Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
> Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>
> ---
> arch/arm64/kernel/kaslr.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> index 769f24ef628c..7b8af985e497 100644
> --- a/arch/arm64/kernel/kaslr.c
> +++ b/arch/arm64/kernel/kaslr.c
> @@ -135,7 +135,7 @@ u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset)
>     */
>    if ((((u64)_text + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT) !=
>        (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT))
> -        offset = (offset + (u64)(_end - _text)) & mask;
> +        offset = (offset + (u64)(_end - _text) + SZ_2M) & mask;
> 
>    if (IS_ENABLED(CONFIG_KASAN))
>        /*


Hi,

Thanks for spotting this!

Instead of adding 2 MB, could we round _end - _text up to a SWAPPER_BLOCK_SIZE multiple?
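
As a rough illustration of that suggestion (a user-space sketch, not
the actual patch; the open-coded round_up below matches the kernel
helper only for power-of-two alignments, which SWAPPER_BLOCK_SIZE is):

#include <stdio.h>
#include <stdint.h>

/* Power-of-two round-up, equivalent to the kernel's round_up() here. */
#define round_up(x, y)		((((x) - 1) | ((y) - 1)) + 1)

#define SZ_2M			0x200000ULL
#define SWAPPER_BLOCK_SIZE	SZ_2M	/* 4KB granule, section maps */

int main(void)
{
	uint64_t kimg_sz = 0xffffff800aa1b000ULL - 0xffffff8008080000ULL; /* _end - _text */
	uint64_t offset  = 0x1f35600000ULL;
	uint64_t mask    = ((1ULL << (39 - 2)) - 1) & ~(SZ_2M - 1);

	/*
	 * Rounding the image size up to a SWAPPER_BLOCK_SIZE multiple keeps
	 * the corrected offset 2MB-aligned, so masking to 2MB granularity
	 * leaves it where it was bumped to.
	 */
	uint64_t corrected = (offset + round_up(kimg_sz, SWAPPER_BLOCK_SIZE)) & mask;

	printf("corrected offset = %#llx\n", (unsigned long long)corrected);
	return 0;
}

With the numbers from the patch this prints 0x1f38000000: the image
size 0x299b000 is rounded up to 0x2a00000, and the mask no longer eats
into the correction.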


* [PATCH v2] arm64: kaslr: Fix up the kernel image alignment
  2017-03-22  9:27 ` Ard Biesheuvel
@ 2017-03-22 11:38   ` Srinivas Ramana
  2017-03-22 12:16     ` Ard Biesheuvel
  0 siblings, 1 reply; 8+ messages in thread
From: Srinivas Ramana @ 2017-03-22 11:38 UTC (permalink / raw)
  To: catalin.marinas, will.deacon, ard.biesheuvel
  Cc: linux-arm-kernel, linux-kernel, linux-arm-msm, Neeraj Upadhyay,
	Srinivas Ramana

From: Neeraj Upadhyay <neeraju@codeaurora.org>

If the kernel image extends across an alignment boundary, the existing
code increases the KASLR offset by the size of the kernel image. The
offset is masked after this correction. There are cases where, even
after masking, the kernel image still extends across the boundary.
This eventually results in only a 2MB block getting mapped while the
page tables are created, which leads to data aborts when unmapped
regions are accessed during the second relocation (with the KASLR
offset) in __primary_switch. To fix this problem, round the kernel
image size up to a multiple of the swapper block size before adding it
as the correction.

For example, consider the case below, where the kernel image still
crosses a 1GB alignment boundary after the offset is masked; this is
fixed by rounding up the kernel image size.

SWAPPER_TABLE_SHIFT = 30
Swapper using section maps with section size 2MB.
CONFIG_PGTABLE_LEVELS = 3
VA_BITS = 39

_text  : 0xffffff8008080000
_end   : 0xffffff800aa1b000
offset : 0x1f35600000
mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1)

(_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
(_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d

offset after existing correction (before mask) = 0x1f37f9b000
(_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
(_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d

offset (after mask) = 0x1f37e00000
(_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
(_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d

new offset w/ rounding up = 0x1f38000000
(_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
(_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d

Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>
---
 arch/arm64/kernel/kaslr.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 769f24ef628c..d7e90d97f5c4 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -131,11 +131,15 @@ u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset)
 	/*
 	 * The kernel Image should not extend across a 1GB/32MB/512MB alignment
 	 * boundary (for 4KB/16KB/64KB granule kernels, respectively). If this
-	 * happens, increase the KASLR offset by the size of the kernel image.
+	 * happens, increase the KASLR offset by the size of the kernel image
+	 * rounded up by SWAPPER_BLOCK_SIZE.
 	 */
 	if ((((u64)_text + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT) !=
-	    (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT))
-		offset = (offset + (u64)(_end - _text)) & mask;
+	    (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT)) {
+		u64 kimg_sz = _end - _text;
+		offset = (offset + round_up(kimg_sz, SWAPPER_BLOCK_SIZE))
+				& mask;
+	}
 
 	if (IS_ENABLED(CONFIG_KASAN))
 		/*
-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc., 
is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
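
For context on the constants in the hunk above, a rough reference
(assuming the 2017-era arm64 kernel-pgtable.h definitions: section maps
for the 4KB granule, page maps otherwise); the boundary the check
guards against is 1 << SWAPPER_TABLE_SHIFT:

/*
 *   granule   SWAPPER_BLOCK_SIZE   SWAPPER_TABLE_SHIFT   boundary
 *   4KB       2MB (section map)    30                    1GB
 *   16KB      16KB (page map)      25                    32MB
 *   64KB      64KB (page map)      29                    512MB
 */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* The condition kaslr_early_init() tests, reproduced stand-alone. */
static bool crosses_swapper_table_boundary(uint64_t text, uint64_t end,
					   uint64_t offset,
					   unsigned int table_shift)
{
	return ((text + offset) >> table_shift) !=
	       ((end + offset) >> table_shift);
}

int main(void)
{
	/* The masked old-style correction from the commit message: still crossing. */
	printf("crosses: %d\n",
	       crosses_swapper_table_boundary(0xffffff8008080000ULL,
					      0xffffff800aa1b000ULL,
					      0x1f37e00000ULL, 30));
	return 0;
}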


* Re: [PATCH v2] arm64: kaslr: Fix up the kernel image alignment
  2017-03-22 11:38   ` [PATCH v2] arm64: kaslr: Fix up the kernel image alignment Srinivas Ramana
@ 2017-03-22 12:16     ` Ard Biesheuvel
  2017-03-22 12:40       ` Will Deacon
  0 siblings, 1 reply; 8+ messages in thread
From: Ard Biesheuvel @ 2017-03-22 12:16 UTC (permalink / raw)
  To: Srinivas Ramana
  Cc: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel,
	linux-arm-msm, Neeraj Upadhyay

On 22 March 2017 at 11:38, Srinivas Ramana <sramana@codeaurora.org> wrote:
> From: Neeraj Upadhyay <neeraju@codeaurora.org>
>
> If kernel image extends across alignment boundary, existing
> code increases the KASLR offset by size of kernel image. The
> offset is masked after resizing. There are cases, where after
> masking, we may still have kernel image extending across
> boundary. This eventually results in only 2MB block getting
> mapped while creating the page tables. This results in data aborts
> while accessing unmapped regions during second relocation (with
> kaslr offset) in __primary_switch. To fix this problem, round up the
> kernel image size, by swapper block size, before adding it for
> correction.
>
> For example consider below case, where kernel image still crosses
> 1GB alignment boundary, after masking the offset, which is fixed
> by rounding up kernel image size.
>
> SWAPPER_TABLE_SHIFT = 30
> Swapper using section maps with section size 2MB.
> CONFIG_PGTABLE_LEVELS = 3
> VA_BITS = 39
>
> _text  : 0xffffff8008080000
> _end   : 0xffffff800aa1b000
> offset : 0x1f35600000
> mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1)
>
> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
>
> offset after existing correction (before mask) = 0x1f37f9b000
> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
>
> offset (after mask) = 0x1f37e00000
> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
>
> new offset w/ rounding up = 0x1f38000000
> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
>
> Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
> Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
> Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

... and thanks for the excellent commit log message!

> ---
>  arch/arm64/kernel/kaslr.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
> index 769f24ef628c..d7e90d97f5c4 100644
> --- a/arch/arm64/kernel/kaslr.c
> +++ b/arch/arm64/kernel/kaslr.c
> @@ -131,11 +131,15 @@ u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset)
>         /*
>          * The kernel Image should not extend across a 1GB/32MB/512MB alignment
>          * boundary (for 4KB/16KB/64KB granule kernels, respectively). If this
> -        * happens, increase the KASLR offset by the size of the kernel image.
> +        * happens, increase the KASLR offset by the size of the kernel image
> +        * rounded up by SWAPPER_BLOCK_SIZE.
>          */
>         if ((((u64)_text + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT) !=
> -           (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT))
> -               offset = (offset + (u64)(_end - _text)) & mask;
> +           (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT)) {
> +               u64 kimg_sz = _end - _text;
> +               offset = (offset + round_up(kimg_sz, SWAPPER_BLOCK_SIZE))
> +                               & mask;
> +       }
>
>         if (IS_ENABLED(CONFIG_KASAN))
>                 /*
> --
> Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.,
> is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
>


* Re: [PATCH v2] arm64: kaslr: Fix up the kernel image alignment
  2017-03-22 12:16     ` Ard Biesheuvel
@ 2017-03-22 12:40       ` Will Deacon
  2017-03-22 13:45         ` Srinivas Ramana
  0 siblings, 1 reply; 8+ messages in thread
From: Will Deacon @ 2017-03-22 12:40 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Srinivas Ramana, Catalin Marinas, linux-arm-kernel, linux-kernel,
	linux-arm-msm, Neeraj Upadhyay

On Wed, Mar 22, 2017 at 12:16:24PM +0000, Ard Biesheuvel wrote:
> On 22 March 2017 at 11:38, Srinivas Ramana <sramana@codeaurora.org> wrote:
> > From: Neeraj Upadhyay <neeraju@codeaurora.org>
> >
> > If kernel image extends across alignment boundary, existing
> > code increases the KASLR offset by size of kernel image. The
> > offset is masked after resizing. There are cases, where after
> > masking, we may still have kernel image extending across
> > boundary. This eventually results in only 2MB block getting
> > mapped while creating the page tables. This results in data aborts
> > while accessing unmapped regions during second relocation (with
> > kaslr offset) in __primary_switch. To fix this problem, round up the
> > kernel image size, by swapper block size, before adding it for
> > correction.
> >
> > For example consider below case, where kernel image still crosses
> > 1GB alignment boundary, after masking the offset, which is fixed
> > by rounding up kernel image size.
> >
> > SWAPPER_TABLE_SHIFT = 30
> > Swapper using section maps with section size 2MB.
> > CONFIG_PGTABLE_LEVELS = 3
> > VA_BITS = 39
> >
> > _text  : 0xffffff8008080000
> > _end   : 0xffffff800aa1b000
> > offset : 0x1f35600000
> > mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1)
> >
> > (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
> > (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
> >
> > offset after existing correction (before mask) = 0x1f37f9b000
> > (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
> > (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
> >
> > offset (after mask) = 0x1f37e00000
> > (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
> > (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
> >
> > new offset w/ rounding up = 0x1f38000000
> > (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
> > (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
> >
> > Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
> > Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
> > Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>
> 
> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> 
> ... and thanks for the excellent commit log message!

Thanks both. I've picked this up as a fix.

Will


* Re: [PATCH v2] arm64: kaslr: Fix up the kernel image alignment
  2017-03-22 12:40       ` Will Deacon
@ 2017-03-22 13:45         ` Srinivas Ramana
  2017-03-23  9:32           ` Srinivas Ramana
  0 siblings, 1 reply; 8+ messages in thread
From: Srinivas Ramana @ 2017-03-22 13:45 UTC (permalink / raw)
  To: Will Deacon
  Cc: Ard Biesheuvel, Catalin Marinas, linux-arm-kernel, linux-kernel,
	linux-arm-msm, Neeraj Upadhyay

On 03/22/2017 06:10 PM, Will Deacon wrote:
> On Wed, Mar 22, 2017 at 12:16:24PM +0000, Ard Biesheuvel wrote:
>> On 22 March 2017 at 11:38, Srinivas Ramana <sramana@codeaurora.org> wrote:
>>> From: Neeraj Upadhyay <neeraju@codeaurora.org>
>>>
>>> If kernel image extends across alignment boundary, existing
>>> code increases the KASLR offset by size of kernel image. The
>>> offset is masked after resizing. There are cases, where after
>>> masking, we may still have kernel image extending across
>>> boundary. This eventually results in only 2MB block getting
>>> mapped while creating the page tables. This results in data aborts
>>> while accessing unmapped regions during second relocation (with
>>> kaslr offset) in __primary_switch. To fix this problem, round up the
>>> kernel image size, by swapper block size, before adding it for
>>> correction.
>>>
>>> For example consider below case, where kernel image still crosses
>>> 1GB alignment boundary, after masking the offset, which is fixed
>>> by rounding up kernel image size.
>>>
>>> SWAPPER_TABLE_SHIFT = 30
>>> Swapper using section maps with section size 2MB.
>>> CONFIG_PGTABLE_LEVELS = 3
>>> VA_BITS = 39
>>>
>>> _text  : 0xffffff8008080000
>>> _end   : 0xffffff800aa1b000
>>> offset : 0x1f35600000
>>> mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1)
>>>
>>> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
>>> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
>>>
>>> offset after existing correction (before mask) = 0x1f37f9b000
>>> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
>>> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
>>>
>>> offset (after mask) = 0x1f37e00000
>>> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
>>> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
>>>
>>> new offset w/ rounding up = 0x1f38000000
>>> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
>>> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
>>>
>>> Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
>>> Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
>>> Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>
>>
>> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>>
>> ... and thanks for the excellent commit log message!
>
> Thanks both. I've picked this up as a fix.
>
> Will
>

Thanks Ard and Will for the review and for picking up this patch.
Can we also add Cc: <stable@vger.kernel.org>?

Thanks,
-- Srinivas R


-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.,
is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.


* Re: [PATCH v2] arm64: kaslr: Fix up the kernel image alignment
  2017-03-22 13:45         ` Srinivas Ramana
@ 2017-03-23  9:32           ` Srinivas Ramana
  2017-03-23  9:34             ` Ard Biesheuvel
  0 siblings, 1 reply; 8+ messages in thread
From: Srinivas Ramana @ 2017-03-23  9:32 UTC (permalink / raw)
  To: Will Deacon
  Cc: Ard Biesheuvel, Catalin Marinas, linux-arm-kernel, linux-kernel,
	linux-arm-msm, Neeraj Upadhyay

On 03/22/2017 07:15 PM, Srinivas Ramana wrote:
> On 03/22/2017 06:10 PM, Will Deacon wrote:
>> On Wed, Mar 22, 2017 at 12:16:24PM +0000, Ard Biesheuvel wrote:
>>> On 22 March 2017 at 11:38, Srinivas Ramana <sramana@codeaurora.org>
>>> wrote:
>>>> From: Neeraj Upadhyay <neeraju@codeaurora.org>
>>>>
>>>> If kernel image extends across alignment boundary, existing
>>>> code increases the KASLR offset by size of kernel image. The
>>>> offset is masked after resizing. There are cases, where after
>>>> masking, we may still have kernel image extending across
>>>> boundary. This eventually results in only 2MB block getting
>>>> mapped while creating the page tables. This results in data aborts
>>>> while accessing unmapped regions during second relocation (with
>>>> kaslr offset) in __primary_switch. To fix this problem, round up the
>>>> kernel image size, by swapper block size, before adding it for
>>>> correction.
>>>>
>>>> For example consider below case, where kernel image still crosses
>>>> 1GB alignment boundary, after masking the offset, which is fixed
>>>> by rounding up kernel image size.
>>>>
>>>> SWAPPER_TABLE_SHIFT = 30
>>>> Swapper using section maps with section size 2MB.
>>>> CONFIG_PGTABLE_LEVELS = 3
>>>> VA_BITS = 39
>>>>
>>>> _text  : 0xffffff8008080000
>>>> _end   : 0xffffff800aa1b000
>>>> offset : 0x1f35600000
>>>> mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1)
>>>>
>>>> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
>>>> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
>>>>
>>>> offset after existing correction (before mask) = 0x1f37f9b000
>>>> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
>>>> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
>>>>
>>>> offset (after mask) = 0x1f37e00000
>>>> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
>>>> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
>>>>
>>>> new offset w/ rounding up = 0x1f38000000
>>>> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
>>>> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
>>>>
>>>> Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
>>>> Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
>>>> Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>
>>>
>>> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>>>
>>> ... and thanks for the excellent commit log message!
>>
>> Thanks both. I've picked this up as a fix.
>>
>> Will
>>
>
> Thanks Ard and Will for the review and picking this patch.
> can we also CC: <stable@vger.kernel.org> ?
>
> Thanks,
> -- Srinivas R
>
>

Sorry, there is a checkpatch error in the last patch. I will submit a
v3 after fixing it.

Thanks,
-- Srinivas R

-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.,
is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.


* Re: [PATCH v2] arm64: kaslr: Fix up the kernel image alignment
  2017-03-23  9:32           ` Srinivas Ramana
@ 2017-03-23  9:34             ` Ard Biesheuvel
  0 siblings, 0 replies; 8+ messages in thread
From: Ard Biesheuvel @ 2017-03-23  9:34 UTC (permalink / raw)
  To: Srinivas Ramana
  Cc: Will Deacon, Catalin Marinas, linux-arm-kernel, linux-kernel,
	linux-arm-msm, Neeraj Upadhyay

On 23 March 2017 at 09:32, Srinivas Ramana <sramana@codeaurora.org> wrote:
> On 03/22/2017 07:15 PM, Srinivas Ramana wrote:
>>
>> On 03/22/2017 06:10 PM, Will Deacon wrote:
>>>
>>> On Wed, Mar 22, 2017 at 12:16:24PM +0000, Ard Biesheuvel wrote:
>>>>
>>>> On 22 March 2017 at 11:38, Srinivas Ramana <sramana@codeaurora.org>
>>>> wrote:
>>>>>
>>>>> From: Neeraj Upadhyay <neeraju@codeaurora.org>
>>>>>
>>>>> If kernel image extends across alignment boundary, existing
>>>>> code increases the KASLR offset by size of kernel image. The
>>>>> offset is masked after resizing. There are cases, where after
>>>>> masking, we may still have kernel image extending across
>>>>> boundary. This eventually results in only 2MB block getting
>>>>> mapped while creating the page tables. This results in data aborts
>>>>> while accessing unmapped regions during second relocation (with
>>>>> kaslr offset) in __primary_switch. To fix this problem, round up the
>>>>> kernel image size, by swapper block size, before adding it for
>>>>> correction.
>>>>>
>>>>> For example consider below case, where kernel image still crosses
>>>>> 1GB alignment boundary, after masking the offset, which is fixed
>>>>> by rounding up kernel image size.
>>>>>
>>>>> SWAPPER_TABLE_SHIFT = 30
>>>>> Swapper using section maps with section size 2MB.
>>>>> CONFIG_PGTABLE_LEVELS = 3
>>>>> VA_BITS = 39
>>>>>
>>>>> _text  : 0xffffff8008080000
>>>>> _end   : 0xffffff800aa1b000
>>>>> offset : 0x1f35600000
>>>>> mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1)
>>>>>
>>>>> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
>>>>> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
>>>>>
>>>>> offset after existing correction (before mask) = 0x1f37f9b000
>>>>> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
>>>>> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
>>>>>
>>>>> offset (after mask) = 0x1f37e00000
>>>>> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7c
>>>>> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
>>>>>
>>>>> new offset w/ rounding up = 0x1f38000000
>>>>> (_text + offset) >> SWAPPER_TABLE_SHIFT = 0x3fffffe7d
>>>>> (_end + offset) >> SWAPPER_TABLE_SHIFT  = 0x3fffffe7d
>>>>>
>>>>> Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
>>>>> Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
>>>>> Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>
>>>>
>>>>
>>>> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>>>>
>>>> ... and thanks for the excellent commit log message!
>>>
>>>
>>> Thanks both. I've picked this up as a fix.
>>>
>>> Will
>>>
>>
>> Thanks Ard and Will for the review and picking this patch.
>> can we also CC: <stable@vger.kernel.org> ?
>>
>> Thanks,
>> -- Srinivas R
>>
>>
>
> Sorry, there is a checkpatch error in the last patch. I will submit v3
> after fixing the checkpatch error.
>

I wouldn't worry about that. Will has already queued the patch.

