From: Daniel Axtens <dja@axtens.net>
To: Mark Rutland <mark.rutland@arm.com>
Cc: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
linux-kernel@vger.kernel.org, dvyukov@google.com
Subject: Re: [PATCH v2 1/3] kasan: support backing vmalloc space with real shadow memory
Date: Tue, 30 Jul 2019 18:38:47 +1000 [thread overview]
Message-ID: <877e7zhq7c.fsf@dja-thinkpad.axtens.net> (raw)
In-Reply-To: <20190729154426.GA51922@lakrids.cambridge.arm.com>
Hi Mark,
Thanks for your email - I'm quite new to mm code, and the feedback is
very helpful.
>> +#ifndef CONFIG_KASAN_VMALLOC
>> int kasan_module_alloc(void *addr, size_t size)
>> {
>> void *ret;
>> @@ -603,6 +604,7 @@ void kasan_free_shadow(const struct vm_struct *vm)
>> if (vm->flags & VM_KASAN)
>> vfree(kasan_mem_to_shadow(vm->addr));
>> }
>> +#endif
>
> IIUC we can drop MODULE_ALIGN back to PAGE_SIZE in this case, too.
Yes, done.
>> core_initcall(kasan_memhotplug_init);
>> #endif
>> +
>> +#ifdef CONFIG_KASAN_VMALLOC
>> +void kasan_cover_vmalloc(unsigned long requested_size, struct vm_struct *area)
>
> Nit: I think it would be more consistent to call this
> kasan_populate_vmalloc().
>
Absolutely. I didn't love the name, but it just didn't click for me
that 'populate' was the better verb.
>> +{
>> + unsigned long shadow_alloc_start, shadow_alloc_end;
>> + unsigned long addr;
>> + unsigned long backing;
>> + pgd_t *pgdp;
>> + p4d_t *p4dp;
>> + pud_t *pudp;
>> + pmd_t *pmdp;
>> + pte_t *ptep;
>> + pte_t backing_pte;
>
> Nit: I think it would be preferable to use 'page' rather than 'backing',
> and 'pte' rather than 'backing_pte', since there's no other namespace to
> collide with here. Otherwise, using 'shadow' rather than 'backing' would
> be consistent with the existing kasan code.
Not a problem, done.
>> + addr = shadow_alloc_start;
>> + do {
>> + pgdp = pgd_offset_k(addr);
>> + p4dp = p4d_alloc(&init_mm, pgdp, addr);
>> + pudp = pud_alloc(&init_mm, p4dp, addr);
>> + pmdp = pmd_alloc(&init_mm, pudp, addr);
>> + ptep = pte_alloc_kernel(pmdp, addr);
>> +
>> + /*
>> + * we can validly get here if pte is not none: it means we
>> + * allocated this page earlier to use part of it for another
>> + * allocation
>> + */
>> + if (pte_none(*ptep)) {
>> + backing = __get_free_page(GFP_KERNEL);
>> + backing_pte = pfn_pte(PFN_DOWN(__pa(backing)),
>> + PAGE_KERNEL);
>> + set_pte_at(&init_mm, addr, ptep, backing_pte);
>> + }
>
> Does anything prevent two threads from racing to allocate the same
> shadow page?
>
> AFAICT it's possible for two threads to get down to the ptep, then both
> see pte_none(*ptep)), then both try to allocate the same page.
>
> I suspect we have to take init_mm::page_table_lock when plumbing this
> in, similarly to __pte_alloc().
Good catch. I think you're right, I'll add the lock.
>> + } while (addr += PAGE_SIZE, addr != shadow_alloc_end);
>> +
>> + kasan_unpoison_shadow(area->addr, requested_size);
>> + requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
>> + kasan_poison_shadow(area->addr + requested_size,
>> + area->size - requested_size,
>> + KASAN_VMALLOC_INVALID);
>
> IIUC, this could leave the final portion of an allocated page
> unpoisoned.
>
> I think it might make more sense to poison each page when it's
> allocated, then plumb it into the page tables, then unpoison the object.
>
> That way, we can rely on any shadow allocated by another thread having
> been initialized to KASAN_VMALLOC_INVALID, and only need mutual
> exclusion when allocating the shadow, rather than when poisoning
> objects.
Yes, that makes sense, will do.
Thanks again,
Daniel
Thread overview:
2019-07-29 14:21 [PATCH v2 0/3] kasan: support backing vmalloc space with real shadow memory Daniel Axtens
2019-07-29 14:21 ` [PATCH v2 1/3] " Daniel Axtens
2019-07-29 15:44 ` Mark Rutland
2019-07-30 8:38 ` Daniel Axtens [this message]
2019-07-31 6:34 ` Daniel Axtens
2019-07-29 14:21 ` [PATCH v2 2/3] fork: support VMAP_STACK with KASAN_VMALLOC Daniel Axtens
2019-07-29 14:21 ` [PATCH v2 3/3] x86/kasan: support KASAN_VMALLOC Daniel Axtens