linux-kernel.vger.kernel.org archive mirror
From: christophe leroy <christophe.leroy@c-s.fr>
To: Andrey Konovalov <andreyknvl@google.com>
Cc: Daniel Axtens <dja@axtens.net>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Paul Mackerras <paulus@samba.org>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Nicholas Piggin <npiggin@gmail.com>,
	"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	Alexander Potapenko <glider@google.com>,
	Dmitry Vyukov <dvyukov@google.com>,
	Linux Memory Management List <linux-mm@kvack.org>,
	PowerPC <linuxppc-dev@lists.ozlabs.org>,
	LKML <linux-kernel@vger.kernel.org>,
	kasan-dev <kasan-dev@googlegroups.com>
Subject: Re: [PATCH v4 3/3] powerpc/32: Add KASAN support
Date: Sat, 9 Feb 2019 12:55:12 +0100	[thread overview]
Message-ID: <f8b9e9ec-991b-6824-46c2-f7fc0aaa7fb8@c-s.fr> (raw)
In-Reply-To: <CAAeHK+wcUwLiSQffUkcyiH2fuox=VihJadEqQqRG1YfU3Y2gDA@mail.gmail.com>

Hi Andrey,

On 08/02/2019 at 18:40, Andrey Konovalov wrote:
> On Fri, Feb 8, 2019 at 6:17 PM Christophe Leroy <christophe.leroy@c-s.fr> wrote:
>>
>> Hi Daniel,
>>
>> On 08/02/2019 at 17:18, Daniel Axtens wrote:
>>> Hi Christophe,
>>>
>>> I've been attempting to port this to 64-bit Book3e nohash (e6500),
>>> although I think I've ended up with an approach more similar to Aneesh's
>>> much earlier (2015) series for book3s.
>>>
>>> Part of this is just due to the changes between 32 and 64 bits - we need
>>> to hack around the discontiguous mappings - but one thing that I'm
>>> particularly puzzled by is what the kasan_early_init is supposed to do.
>>
>> It shouldn't be a problem, as my patch uses a 'for_each_memblock(memory,
>> reg)' loop.
>>
>>>
>>>> +void __init kasan_early_init(void)
>>>> +{
>>>> +    unsigned long addr = KASAN_SHADOW_START;
>>>> +    unsigned long end = KASAN_SHADOW_END;
>>>> +    unsigned long next;
>>>> +    pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(addr), addr), addr);
>>>> +    int i;
>>>> +    phys_addr_t pa = __pa(kasan_early_shadow_page);
>>>> +
>>>> +    BUILD_BUG_ON(KASAN_SHADOW_START & ~PGDIR_MASK);
>>>> +
>>>> +    if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE))
>>>> +            panic("KASAN not supported with Hash MMU\n");
>>>> +
>>>> +    for (i = 0; i < PTRS_PER_PTE; i++)
>>>> +            __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
>>>> +                         kasan_early_shadow_pte + i,
>>>> +                         pfn_pte(PHYS_PFN(pa), PAGE_KERNEL_RO), 0);
>>>> +
>>>> +    do {
>>>> +            next = pgd_addr_end(addr, end);
>>>> +            pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte);
>>>> +    } while (pmd++, addr = next, addr != end);
>>>> +}
>>>
>>> As far as I can tell it's mapping the early shadow page, read-only, over
>>> the KASAN_SHADOW_START->KASAN_SHADOW_END range, and it's using the early
>>> shadow PTE array from the generic code.
>>>
>>> I haven't been able to find an answer to why this is in the docs, so I
>>> was wondering if you or anyone else could explain the early part of
>>> kasan init a bit better.
>>
>> See https://www.kernel.org/doc/html/latest/dev-tools/kasan.html for an
>> explanation of the shadow.
>>
>> When shadow is 0, it means the memory area is entirely accessible.
>>
>> It is necessary to set up a shadow area as soon as possible, because all
>> data accesses check the shadow area from the beginning (except for a few
>> files where sanitizing has been disabled in Makefiles).
>>
>> Until the real shadow area is set up, all accesses are granted thanks to
>> the zero shadow area being full of zeros.
> 
> Not entirely correct. kasan_early_init() indeed maps the whole shadow
> memory range to the same kasan_early_shadow_page. However as kernel
> loads and memory gets allocated this shadow page gets rewritten with
> non-zero values by different KASAN allocator hooks. Since these values
> come from completely different parts of the kernel, but all land on
> the same page, kasan_early_shadow_page's content can be considered
> garbage. When KASAN checks memory accesses for validity it detects
> these garbage shadow values, but doesn't print any reports, as the
> reporting routine bails out on the current->kasan_depth check (which
> has the value of 1 initially). Only after kasan_init() completes, when
> the proper shadow memory is mapped, current->kasan_depth gets set to 0
> and we start reporting bad accesses.

That's surprising, because in the early phase I map the shadow area 
read-only, so I do not expect it to get modified unless RO protection is 
failing for some reason.

Next week I'll add a test in early_init() to check the content of the 
early shadow area.

Christophe

> 
>>
>> I mainly used the ARM arch as an example when I implemented KASAN for ppc32.
>>
>>>
>>> At the moment, I don't do any early init, and like Aneesh's series for
>>> book3s, I end up needing a special flag to disable kasan until after
>>> kasan_init. Also, as with Balbir's series for Radix, some tests didn't
>>> fire, although my missing tests are a superset of his. I suspect the
>>> early init has something to do with these...?
>>
>> I think you should really focus on establishing a zero shadow area as
>> early as possible instead of trying to hack the core parts of KASAN.
>>
>>>
>>> (I'm happy to collate answers into a patch to the docs, btw!)
>>
>> We can also have the discussion going via
>> https://github.com/linuxppc/issues/issues/106
>>
>>>
>>> In the long term I hope to revive Aneesh's and Balbir's series for hash
>>> and radix as well.
>>
>> Great.
>>
>> Christophe
>>
>>>
>>> Regards,
>>> Daniel
>>>
>>>> +
>>>> +static void __init kasan_init_region(struct memblock_region *reg)
>>>> +{
>>>> +    void *start = __va(reg->base);
>>>> +    void *end = __va(reg->base + reg->size);
>>>> +    unsigned long k_start, k_end, k_cur, k_next;
>>>> +    pmd_t *pmd;
>>>> +
>>>> +    if (start >= end)
>>>> +            return;
>>>> +
>>>> +    k_start = (unsigned long)kasan_mem_to_shadow(start);
>>>> +    k_end = (unsigned long)kasan_mem_to_shadow(end);
>>>> +    pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start);
>>>> +
>>>> +    for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) {
>>>> +            k_next = pgd_addr_end(k_cur, k_end);
>>>> +            if ((void *)pmd_page_vaddr(*pmd) == kasan_early_shadow_pte) {
>>>> +                    pte_t *new = pte_alloc_one_kernel(&init_mm);
>>>> +
>>>> +                    if (!new)
>>>> +                            panic("kasan: pte_alloc_one_kernel() failed");
>>>> +                    memcpy(new, kasan_early_shadow_pte, PTE_TABLE_SIZE);
>>>> +                    pmd_populate_kernel(&init_mm, pmd, new);
>>>> +            }
>>>> +    };
>>>> +
>>>> +    for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE) {
>>>> +            void *va = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
>>>> +            pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL);
>>>> +
>>>> +            if (!va)
>>>> +                    panic("kasan: memblock_alloc() failed");
>>>> +            pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
>>>> +            pte_update(pte_offset_kernel(pmd, k_cur), ~0, pte_val(pte));
>>>> +    }
>>>> +    flush_tlb_kernel_range(k_start, k_end);
>>>> +}
>>>> +
>>>> +void __init kasan_init(void)
>>>> +{
>>>> +    struct memblock_region *reg;
>>>> +
>>>> +    for_each_memblock(memory, reg)
>>>> +            kasan_init_region(reg);
>>>> +
>>>> +    kasan_init_tags();
>>>> +
>>>> +    /* At this point kasan is fully initialized. Enable error messages */
>>>> +    init_task.kasan_depth = 0;
>>>> +    pr_info("KASAN init done\n");
>>>> +}
>>>> diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
>>>> index 33cc6f676fa6..ae7db88b72d6 100644
>>>> --- a/arch/powerpc/mm/mem.c
>>>> +++ b/arch/powerpc/mm/mem.c
>>>> @@ -369,6 +369,10 @@ void __init mem_init(void)
>>>>       pr_info("  * 0x%08lx..0x%08lx  : highmem PTEs\n",
>>>>               PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
>>>>    #endif /* CONFIG_HIGHMEM */
>>>> +#ifdef CONFIG_KASAN
>>>> +    pr_info("  * 0x%08lx..0x%08lx  : kasan shadow mem\n",
>>>> +            KASAN_SHADOW_START, KASAN_SHADOW_END);
>>>> +#endif
>>>>    #ifdef CONFIG_NOT_COHERENT_CACHE
>>>>       pr_info("  * 0x%08lx..0x%08lx  : consistent mem\n",
>>>>               IOREMAP_TOP, IOREMAP_TOP + CONFIG_CONSISTENT_SIZE);
>>>> --
>>>> 2.13.3
>>


Thread overview: 10+ messages
     [not found] <cover.1548166824.git.christophe.leroy@c-s.fr>
     [not found] ` <1f5629e03181d0e30efc603f00dad78912991a45.1548166824.git.christophe.leroy@c-s.fr>
2019-02-08 16:18   ` [PATCH v4 3/3] powerpc/32: Add KASAN support Daniel Axtens
2019-02-08 17:17     ` Christophe Leroy
2019-02-08 17:40       ` Andrey Konovalov
2019-02-09 11:55         ` christophe leroy [this message]
2019-02-11 12:25           ` Andrey Konovalov
2019-02-11 16:28             ` Andrey Ryabinin
2019-02-12  1:08               ` Daniel Axtens
2019-02-12 11:38                 ` Christophe Leroy
2019-02-12 12:14                   ` Andrey Ryabinin
2019-02-12 12:02                 ` Andrey Ryabinin
