From: Dmitry Vyukov <dvyukov@google.com>
To: Daniel Axtens <dja@axtens.net>
Cc: kasan-dev <kasan-dev@googlegroups.com>,
	Linux-MM <linux-mm@kvack.org>,
	"the arch/x86 maintainers" <x86@kernel.org>,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	Alexander Potapenko <glider@google.com>,
	Andy Lutomirski <luto@kernel.org>
Subject: Re: [PATCH 1/3] kasan: support backing vmalloc space with real shadow memory
Date: Mon, 29 Jul 2019 12:28:26 +0200	[thread overview]
Message-ID: <CACT4Y+YSNdQdUbQS4K8NxuQf7AmbK1SXx0ZdLtM3cfcY6Dpv2A@mail.gmail.com> (raw)
In-Reply-To: <87blxdgn9k.fsf@dja-thinkpad.axtens.net>

On Mon, Jul 29, 2019 at 12:15 PM Daniel Axtens <dja@axtens.net> wrote:
>
> Hi Dmitry,
>
> Thanks for the feedback!
>
> >> +       addr = shadow_alloc_start;
> >> +       do {
> >> +               pgdp = pgd_offset_k(addr);
> >> +               p4dp = p4d_alloc(&init_mm, pgdp, addr);
> >
> > Page table allocations will be protected by mm->page_table_lock, right?
>
> Yes, each of those alloc functions takes the lock if it ends up in the
> slow path that does the actual allocation (e.g. __p4d_alloc()).
>
> >> +               pudp = pud_alloc(&init_mm, p4dp, addr);
> >> +               pmdp = pmd_alloc(&init_mm, pudp, addr);
> >> +               ptep = pte_alloc_kernel(pmdp, addr);
> >> +
> >> +               /*
> >> +                * we can validly get here if pte is not none: it means we
> >> +                * allocated this page earlier to use part of it for another
> >> +                * allocation
> >> +                */
> >> +               if (pte_none(*ptep)) {
> >> +                       backing = __get_free_page(GFP_KERNEL);
> >> +                       backing_pte = pfn_pte(PFN_DOWN(__pa(backing)),
> >> +                                             PAGE_KERNEL);
> >> +                       set_pte_at(&init_mm, addr, ptep, backing_pte);
> >> +               }
> >> +       } while (addr += PAGE_SIZE, addr != shadow_alloc_end);
> >> +
> >> +       requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
> >> +       kasan_unpoison_shadow(area->addr, requested_size);
> >> +       kasan_poison_shadow(area->addr + requested_size,
> >> +                           area->size - requested_size,
> >> +                           KASAN_VMALLOC_INVALID);
> >
> >
> > Do I read this correctly that if kernel code does vmalloc(64), it will
> > have exactly 64 bytes available rather than a full page? To make
> > sure: vmalloc does not guarantee that the available size is rounded up
> > to page size? I suspect we will see a wave of new bugs related to
> > OOBs on vmalloc memory. So I want to make sure that these will indeed
> > be bugs that we agree need to be fixed.
> > I am sure there will be bugs where the size is controlled by
> > user-space, so these are bad bugs under any circumstances. But there
> > will also probably be OOBs where people will try to "prove" that
> > they are fine and will work (just based on our previous experience :)).
>
> So the implementation of vmalloc will always round it up. The
> description of the function reads, in part:
>
>  * Allocate enough pages to cover @size from the page level
>  * allocator and map them into contiguous kernel virtual space.
>
> So in short it's not quite clear - you could argue that you have a
> guarantee that you get full pages, but you could also argue that you've
> specifically asked for @size bytes and @size bytes only.
>
> So far it seems that users are well behaved in terms of using only the
> amount of memory they ask for, but you'll get a better idea than I will
> very quickly, as I have only tested with trinity. :)

Ack.
Let's try it and see. There is always an easy fix -- round the size up
explicitly before calling vmalloc, which also makes the code more
explicit and clear. I can hardly see any downside to rounding the size
up that way.
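
Something along these lines -- a minimal sketch, where the wrapper and
its name are made up and PAGE_ALIGN()/vmalloc() are just the usual
helpers from linux/mm.h and linux/vmalloc.h:

    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    /*
     * Hypothetical caller: round the requested size up to a whole page
     * before allocating, so the usable (unpoisoned) size and the mapped
     * pages agree and nobody has to guess about the tail of the last page.
     */
    static void *example_alloc(unsigned long requested)
    {
            return vmalloc(PAGE_ALIGN(requested));
    }

That keeps the intent visible at the call site instead of relying on
vmalloc rounding up internally.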

> I also handle vmap - for vmap there's no way to specify sub-page
> allocations so you get as many pages as you ask for.
>
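Right, and for vmap that is visible even in the prototype. Roughly, as a
hypothetical helper (the caller is assumed to already own the page
array):

    #include <linux/vmalloc.h>

    /*
     * Hypothetical helper: map an existing array of @nr pages. The
     * resulting mapping is always nr * PAGE_SIZE bytes; vmap() has no
     * way to express a sub-page size.
     */
    static void *example_map_pages(struct page **pages, unsigned int nr)
    {
            return vmap(pages, nr, VM_MAP, PAGE_KERNEL);
    }

So the vmap case only ever deals with whole pages anyway.
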
> > On the implementation side: kasan_unpoison_shadow seems to be capable of
> > handling non-KASAN_SHADOW_SCALE_SIZE-aligned sizes exactly in the way we
> > want. So I think it's better to do:
> >
> >        kasan_unpoison_shadow(area->addr, requested_size);
> >        requested_size = round_up(requested_size, KASAN_SHADOW_SCALE_SIZE);
> >        kasan_poison_shadow(area->addr + requested_size,
> >                            area->size - requested_size,
> >                            KASAN_VMALLOC_INVALID);
>
> Will do for v2.
>
> Regards,
> Daniel
>


Thread overview: 20+ messages
2019-07-25  5:55 [PATCH 0/3] kasan: support backing vmalloc space with real shadow memory Daniel Axtens
2019-07-25  5:55 ` [PATCH 1/3] " Daniel Axtens
2019-07-25  7:35   ` Dmitry Vyukov
2019-07-25  7:51     ` Dmitry Vyukov
2019-07-25 10:06       ` Marco Elver
2019-07-25 10:11         ` Mark Rutland
2019-07-25 11:38           ` Marco Elver
2019-07-25 15:25         ` Daniel Axtens
2019-07-26  5:11           ` Daniel Axtens
2019-07-26  9:55             ` Marco Elver
2019-07-26 10:32       ` Marco Elver
2019-07-29 10:15     ` Daniel Axtens
2019-07-29 10:28       ` Dmitry Vyukov [this message]
2019-07-25  5:55 ` [PATCH 2/3] fork: support VMAP_STACK with KASAN_VMALLOC Daniel Axtens
2019-07-25  7:37   ` Dmitry Vyukov
2019-07-25  5:55 ` [PATCH 3/3] x86/kasan: support KASAN_VMALLOC Daniel Axtens
2019-07-25  7:49   ` Dmitry Vyukov
2019-07-25 15:08     ` Andy Lutomirski
2019-07-25 15:39       ` Daniel Axtens
2019-07-25 16:32         ` Andy Lutomirski
