From: aryabinin@virtuozzo.com (Andrey Ryabinin)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v5sub1 7/8] arm64: move kernel image to base of vmalloc area
Date: Tue, 16 Feb 2016 18:36:36 +0300	[thread overview]
Message-ID: <56C34204.60605@virtuozzo.com> (raw)
In-Reply-To: <CAKv+Gu8WE=fTZF0Pou24FKDocgshWu8dJRbaC5SJ11E-6T=2Hw@mail.gmail.com>



On 02/16/2016 06:17 PM, Ard Biesheuvel wrote:
> On 16 February 2016 at 13:59, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>>
>>
>> On 02/15/2016 09:59 PM, Catalin Marinas wrote:
>>> On Mon, Feb 15, 2016 at 05:28:02PM +0300, Andrey Ryabinin wrote:
>>>> On 02/12/2016 07:06 PM, Catalin Marinas wrote:
>>>>> So far, we have:
>>>>>
>>>>> KASAN+for-next/kernmap goes wrong
>>>>> KASAN+UBSAN goes wrong
>>>>>
>>>>> Enabled individually, KASAN, UBSAN and for-next/kernmap seem fine. I may
>>>>> have to trim for-next/core down until we figure out where the problem
>>>>> is.
>>>>>
>>>>> BUG: KASAN: stack-out-of-bounds in find_busiest_group+0x164/0x16a0 at addr ffffffc93665bc8c
>>>>
>>>> Could it be related to the TLB conflicts that are supposed to be fixed by the
>>>> "arm64: kasan: avoid TLB conflicts" patch from the "arm64: mm: rework page
>>>> table creation" series?
>>>
>>> I can very easily reproduce this with a vanilla 4.5-rc1 series by
>>> enabling inline instrumentation (maybe Mark's theory is true w.r.t.
>>> image size).
>>>
>>> Some information, maybe you can shed some light on this. It seems to
>>> happen only for secondary CPUs on the swapper stack (I think allocated
>>> via fork_idle()). The code generated looks sane to me, so KASAN should
>>> not complain but maybe there is some uninitialised shadow, hence the
>>> error.
>>>
>>> The report:
>>>
>>
>> Actually, the first report is a bit more useful. It shows that shadow memory was corrupted:
>>
>>   ffffffc93665bc00: f1 f1 f1 f1 00 f4 f4 f4 f2 f2 f2 f2 00 00 f1 f1
>>> ffffffc93665bc80: f1 f1 00 00 00 00 f3 f3 00 f4 f4 f4 f3 f3 f3 f3
>>                       ^
>> F1 - left redzone; it marks the start of a stack frame.
>> F3 - right redzone; it marks the end of a stack frame.
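>>
>> (For reference, each shadow byte covers 8 bytes of stack; the stack redzone
>> values below are those defined in mm/kasan/kasan.h, annotations mine:)
>>
>> /* 0x00 means all 8 bytes of the granule are addressable;
>>  * 1..7 means only that many leading bytes are addressable. */
>> #define KASAN_STACK_LEFT	0xF1	/* left redzone: start of a stack frame */
>> #define KASAN_STACK_MID		0xF2	/* redzone between local variables      */
>> #define KASAN_STACK_RIGHT	0xF3	/* right redzone: end of a stack frame  */
>> #define KASAN_STACK_PARTIAL	0xF4	/* redzone padding after a stack object */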
>>
>> But here we have a second set of F1s without the F3s that should close the first set of F1s.
>> Also, those two F3s in the middle cannot be right.
>>
>> So shadow is corrupted.
>> Some hypotheses:
>>
>> 1) We share a stack between several tasks (e.g. stack overflow, or a somehow corrupted SP).
>>     But that would probably cause a kernel crash later, after the KASAN reports.
>>
>> 2) Shadow memory wasn't cleared. GCC poisons stack memory on function entry and unpoisons it before return.
>>      If we leave a function through some non-standard path, this could cause false positives like that,
>>      e.g. some hand-written assembly return code (see the sketch below this list).
>>
>> 3) Screwed shadow mapping. I think the patch below should uncover such a problem.
>> It boot-tested fine on qemu and didn't show any problems.
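>>
>> To illustrate 2), here is a hand-written sketch (not real compiler output) of
>> what the inline instrumentation amounts to; real frames use 32-byte redzones,
>> i.e. four shadow bytes per redzone, as in the dump above:
>>
>> void instrumented_function(void)
>> {
>> 	char frame[128];			/* redzones + local variables     */
>> 	u8 *shadow = kasan_mem_to_shadow(frame);
>>
>> 	memset(shadow, 0xF1, 4);		/* prologue: poison left redzone  */
>> 	memset(shadow + 12, 0xF3, 4);		/*           poison right redzone */
>>
>> 	/* ... body: every memory access is checked against the shadow ... */
>>
>> 	memset(shadow, 0, 16);			/* epilogue: unpoison the frame   */
>> 	/*
>> 	 * If we leave the function via a hand-written assembly return path,
>> 	 * the epilogue never runs, the poison stays behind in the shadow, and
>> 	 * the next task running on this stack trips over stale F1/F3 bytes.
>> 	 */
>> }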
>>
> 
> I think this patch gives false positive warnings in some cases:
> 
>>
>> ---
>>  arch/arm64/mm/kasan_init.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 55 insertions(+)
>>
>> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
>> index cf038c7..25d685c 100644
>> --- a/arch/arm64/mm/kasan_init.c
>> +++ b/arch/arm64/mm/kasan_init.c
>> @@ -117,6 +117,59 @@ static void __init cpu_set_ttbr1(unsigned long ttbr1)
>>         : "r" (ttbr1));
>>  }
>>
>> +static void verify_shadow(void)
>> +{
>> +       struct memblock_region *reg;
>> +       int i = 0;
>> +
>> +       for_each_memblock(memory, reg) {
>> +               void *start = (void *)__phys_to_virt(reg->base);
>> +               void *end = (void *)__phys_to_virt(reg->base + reg->size);
>> +               int *shadow_start, *shadow_end;
>> +
>> +               if (start >= end)
>> +                       break;
>> +               shadow_start = (int *)((unsigned long)kasan_mem_to_shadow(start) & ~(PAGE_SIZE - 1));
>> +               shadow_end =  (int *)kasan_mem_to_shadow(end);
> 
> shadow_start and shadow_end can refer to the same page as in the
> previous iteration. For instance, I have these two regions
> 
>   0x00006e090000-0x00006e0adfff [Conventional Memory|   |  |  |  |  |   |WB|WT|WC|UC]
>   0x00006e0ae000-0x00006e0affff [Loader Data        |   |  |  |  |  |   |WB|WT|WC|UC]
> 
> which are covered by different memblocks, since the second one is marked
> MEMBLOCK_NOMAP because it contains the UEFI memory map.
> 
> I get the following output
> 
> kasan: screwed shadow mapping 23575, 23573
> 
> which I think simply results from the fact that shadow_start refers to the
> same page as in the previous iteration(s).
> 

You are right.
So we should write the value of 'shadow_start' itself instead of the counter 'i'.
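
For completeness, a rough sketch of the arithmetic behind the aliasing, assuming
kasan_mem_to_shadow(x) == (x >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
with KASAN_SHADOW_SCALE_SHIFT == 3 (one shadow byte per 8 bytes of memory, so a
4K shadow page covers 32K), and using the physical addresses from your example
for brevity (the linear-map offset only adds a page-aligned constant to the
shadow addresses):

  region 1: 0x6e090000-0x6e0adfff  ->  shadow bytes ...dc12000 - ...dc15bff
  region 2: 0x6e0ae000-0x6e0affff  ->  shadow bytes ...dc15c00 - ...dc15fff

Both regions touch the shadow page ...dc15000 - ...dc15fff. With the previous
version of the patch, region 2's page-aligned shadow_start points into a page
that the region 1 pass had already stamped with a different counter value,
hence the mismatch you saw. Writing the shadow address itself makes the stored
value independent of which region stamps a shared page first, so the overlap
no longer triggers a false warning.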

---
 arch/arm64/mm/kasan_init.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index cf038c7..ee035c2 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -117,6 +117,55 @@ static void __init cpu_set_ttbr1(unsigned long ttbr1)
 	: "r" (ttbr1));
 }
 
+static void verify_shadow(void)
+{
+	struct memblock_region *reg;
+
+	for_each_memblock(memory, reg) {
+		void *start = (void *)__phys_to_virt(reg->base);
+		void *end = (void *)__phys_to_virt(reg->base + reg->size);
+		unsigned long *shadow_start, *shadow_end;
+
+		if (start >= end)
+			break;
+		shadow_start = (unsigned long *)kasan_mem_to_shadow(start);
+		shadow_end =  (unsigned long *)kasan_mem_to_shadow(end);
+		for (; shadow_start < shadow_end; shadow_start += PAGE_SIZE/sizeof(unsigned long)) {
+			*shadow_start = (unsigned long)shadow_start;
+		}
+	}
+
+	for_each_memblock(memory, reg) {
+		void *start = (void *)__phys_to_virt(reg->base);
+		void *end = (void *)__phys_to_virt(reg->base + reg->size);
+		unsigned long *shadow_start, *shadow_end;
+
+		if (start >= end)
+			break;
+		shadow_start = (unsigned long *)kasan_mem_to_shadow(start);
+		shadow_end =  (unsigned long *)kasan_mem_to_shadow(end);
+		for (; shadow_start < shadow_end; shadow_start += PAGE_SIZE/sizeof(unsigned long)) {
+			if (*shadow_start != (unsigned long)shadow_start) {
+				pr_err("screwed shadow mapping %lx, %lx\n", *shadow_start, (unsigned long)shadow_start);
+				goto clear;
+			}
+		}
+	}
+clear:
+	for_each_memblock(memory, reg) {
+		void *start = (void *)__phys_to_virt(reg->base);
+		void *end = (void *)__phys_to_virt(reg->base + reg->size);
+		unsigned long shadow_start, shadow_end;
+
+		if (start >= end)
+			break;
+		shadow_start =  (unsigned long)kasan_mem_to_shadow(start);
+		shadow_end =  (unsigned long)kasan_mem_to_shadow(end);
+		memset((void *)shadow_start, 0, shadow_end - shadow_start);
+	}
+
+}
+
 void __init kasan_init(void)
 {
 	struct memblock_region *reg;
@@ -159,6 +208,8 @@ void __init kasan_init(void)
 	cpu_set_ttbr1(__pa(swapper_pg_dir));
 	flush_tlb_all();
 
+	verify_shadow();
+
 	/* At this point kasan is fully initialized. Enable error messages */
 	init_task.kasan_depth = 0;
 	pr_info("KernelAddressSanitizer initialized\n");
-- 


Thread overview: 78+ messages
2016-02-01 10:54 [PATCH v5sub1 0/8] arm64: split linear and kernel mappings Ard Biesheuvel
2016-02-01 10:54 ` [PATCH v5sub1 1/8] of/fdt: make memblock minimum physical address arch configurable Ard Biesheuvel
2016-02-01 10:54 ` [PATCH v5sub1 2/8] arm64: add support for ioremap() block mappings Ard Biesheuvel
2016-02-01 14:10   ` Mark Rutland
2016-02-01 14:56     ` Catalin Marinas
2016-02-01 10:54 ` [PATCH v5sub1 3/8] arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region Ard Biesheuvel
2016-02-01 10:54 ` [PATCH v5sub1 4/8] arm64: pgtable: implement static [pte|pmd|pud]_offset variants Ard Biesheuvel
2016-02-01 10:54 ` [PATCH v5sub1 5/8] arm64: decouple early fixmap init from linear mapping Ard Biesheuvel
2016-02-01 10:54 ` [PATCH v5sub1 6/8] arm64: kvm: deal with kernel symbols outside of " Ard Biesheuvel
2016-02-01 10:54 ` [PATCH v5sub1 7/8] arm64: move kernel image to base of vmalloc area Ard Biesheuvel
2016-02-01 12:24   ` Catalin Marinas
2016-02-01 12:27     ` Ard Biesheuvel
2016-02-01 13:41       ` Catalin Marinas
2016-02-01 14:32   ` Mark Rutland
2016-02-12 14:58   ` Catalin Marinas
2016-02-12 15:02     ` Ard Biesheuvel
2016-02-12 15:10       ` Catalin Marinas
2016-02-12 15:17         ` Ard Biesheuvel
2016-02-12 15:26           ` Catalin Marinas
2016-02-12 15:38             ` Sudeep Holla
2016-02-12 16:06               ` Catalin Marinas
2016-02-12 16:44                 ` Ard Biesheuvel
2016-02-15 14:28                 ` Andrey Ryabinin
2016-02-15 14:35                   ` Mark Rutland
2016-02-15 18:59                   ` Catalin Marinas
2016-02-16 12:59                     ` Andrey Ryabinin
2016-02-16 14:12                       ` Mark Rutland
2016-02-16 14:29                         ` Mark Rutland
2016-02-16 15:17                       ` Ard Biesheuvel
2016-02-16 15:36                         ` Andrey Ryabinin [this message]
2016-02-16 16:42                           ` Mark Rutland
2016-02-17  9:15                             ` Andrey Ryabinin
2016-02-17 10:10                               ` James Morse
2016-02-17 10:19                                 ` Catalin Marinas
2016-02-17 10:36                                   ` Catalin Marinas
2016-02-17 10:18                               ` Catalin Marinas
2016-02-17 10:48                                 ` Mark Rutland
2016-02-17 14:39                       ` Mark Rutland
2016-02-17 16:31                         ` Andrey Ryabinin
2016-02-17 19:35                           ` Mark Rutland
2016-02-17 17:01                         ` KASAN issues with idle / hotplug area (was: Re: [PATCH v5sub1 7/8] arm64: move kernel image to base of vmalloc area) Mark Rutland
2016-02-17 17:56                           ` Mark Rutland
2016-02-17 19:16                             ` Mark Rutland
2016-02-18  8:06                               ` Ard Biesheuvel
2016-02-18  8:22                               ` KASAN issues with idle / hotplug area Andrey Ryabinin
2016-02-18  8:42                                 ` Andrey Ryabinin
2016-02-18  9:38                                 ` Andrey Ryabinin
2016-02-18 11:34                                   ` Mark Rutland
2016-02-18  9:39                                 ` Lorenzo Pieralisi
2016-02-18 11:38                                   ` Mark Rutland
2016-02-18 11:45                                   ` Andrey Ryabinin
2016-02-18 11:15                                 ` Mark Rutland
2016-02-18 11:46                                   ` Andrey Ryabinin
2016-02-18 12:08                                     ` Mark Rutland
2016-02-12 17:47   ` [PATCH v5sub1 7/8] arm64: move kernel image to base of vmalloc area James Morse
2016-02-12 18:01     ` Ard Biesheuvel
2016-02-01 10:54 ` [PATCH v5sub1 8/8] arm64: allow kernel Image to be loaded anywhere in physical memory Ard Biesheuvel
2016-02-01 14:50   ` Mark Rutland
2016-02-01 16:28     ` Fu Wei
2016-02-16  8:55       ` Fu Wei
2016-02-01 15:06   ` Catalin Marinas
2016-02-01 15:13     ` Ard Biesheuvel
2016-02-01 16:31       ` Ard Biesheuvel
2016-02-01 17:31         ` Catalin Marinas
2016-02-01 17:57           ` Ard Biesheuvel
2016-02-01 18:02             ` Catalin Marinas
2016-02-01 18:30               ` [PATCH] arm64: move back to generic memblock_enforce_memory_limit() Ard Biesheuvel
2016-02-02 10:19                 ` Catalin Marinas
2016-02-02 10:28                   ` Ard Biesheuvel
2016-02-02 10:44                     ` Catalin Marinas
2016-02-12 19:45 ` [PATCH v5sub1 0/8] arm64: split linear and kernel mappings Matthias Brugger
2016-02-12 19:47   ` Ard Biesheuvel
2016-02-12 20:10     ` Matthias Brugger
2016-02-12 20:37       ` Ard Biesheuvel
2016-02-13 14:28       ` Ard Biesheuvel
2016-02-15 13:29         ` Matthias Brugger
2016-02-15 13:40           ` Will Deacon
2016-02-15 14:58           ` Ard Biesheuvel
