From: Daniel Axtens <dja@axtens.net>
To: Andy Lutomirski <luto@amacapital.net>,
	Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev <kasan-dev@googlegroups.com>,
	Linux-MM <linux-mm@kvack.org>,
	the arch/x86 maintainers <x86@kernel.org>,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	Alexander Potapenko <glider@google.com>,
	Andy Lutomirski <luto@kernel.org>
Subject: Re: [PATCH 3/3] x86/kasan: support KASAN_VMALLOC
Date: Fri, 26 Jul 2019 01:39:36 +1000
Message-ID: <87lfwmgm2v.fsf@dja-thinkpad.axtens.net>
In-Reply-To: <D7AC2D28-596F-4B9E-B4AD-B03D8485E9F1@amacapital.net>


>> Would it make things simpler if we pre-populate the top level page
>> tables for the whole vmalloc region? That would be
>> (16<<40)/4096/512/512*8 = 131072 bytes?
>> The check in vmalloc_fault is not really a big burden, so I am not
>> sure. Just bringing it up as an option.
>
> I prefer pre-populating them. In particular, I have already spent far too much time debugging the awful explosions when the stack doesn’t have KASAN backing, and the vmap stack code is very careful to pre-populate the stack pgds — vmalloc_fault fundamentally can’t recover when the stack itself isn’t mapped.
>
> So the vmalloc_fault code, if it stays, needs some careful analysis to make sure it will actually survive all the various context switch cases.  Or you can pre-populate it.
>

No worries - I'll have another crack at prepopulating them for v2. 
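
Spelling out your arithmetic for my own benefit (so do correct me if
I've got the levels wrong): each pud entry covers 1 GiB, so a 16 TiB
region needs

    (16 << 40) / (4096 * 512 * 512) = 16384 pud entries
    16384 entries * 8 bytes         = 131072 bytes = 128 KiB

which is 32 pud pages of 4 KiB each. 128 KiB of early allocations
seems a pretty cheap price for never having to fault in a shadow pgd.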

I tried prepopulating them at first, but because I'm really a powerpc
developer rather than an x86 developer (and because I find mm code
confusing at the best of times) I didn't have a lot of luck. On
reflection, I suspect I stuffed up the pgd/p4d handling, and I now have
an idea of how to fix it. So I'll give it another go and ask for help
here if I get stuck :)
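
For the record, here is roughly the shape I have in mind for v2. It is
a completely untested sketch with made-up function names, reusing the
early_alloc() helper that already lives in kasan_init_64.c, so take it
with a large grain of salt:

	static void __init kasan_shallow_populate_p4ds(pgd_t *pgd,
						       unsigned long addr,
						       unsigned long end)
	{
		p4d_t *p4d;
		unsigned long next;
		void *p;

		p4d = p4d_offset(pgd, addr);
		do {
			next = p4d_addr_end(addr, end);

			if (p4d_none(*p4d)) {
				p = early_alloc(PAGE_SIZE, NUMA_NO_NODE, true);
				p4d_populate(&init_mm, p4d, p);
			}
		} while (p4d++, addr = next, addr != end);
	}

	static void __init kasan_shallow_populate_pgds(void *start, void *end)
	{
		unsigned long addr, next;
		pgd_t *pgd;
		void *p;

		addr = (unsigned long)start;
		pgd = pgd_offset_k(addr);
		do {
			next = pgd_addr_end(addr, (unsigned long)end);

			if (pgd_none(*pgd)) {
				p = early_alloc(PAGE_SIZE, NUMA_NO_NODE, true);
				pgd_populate(&init_mm, pgd, p);
			}

			/*
			 * With 4-level paging the p4d level is folded into
			 * the pgd, pgd_populate() is a no-op, and the
			 * allocation has to happen at the p4d level instead.
			 * I suspect this is exactly the bit I got wrong
			 * last time.
			 */
			kasan_shallow_populate_p4ds(pgd, addr, next);
		} while (pgd++, addr = next, addr != (unsigned long)end);
	}

kasan_init() would then populate the shadow of the vmalloc area
shallowly instead of backing it with the early shadow page:

	kasan_shallow_populate_pgds(
		kasan_mem_to_shadow((void *)VMALLOC_START),
		kasan_mem_to_shadow((void *)VMALLOC_END));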

Regards,
Daniel


>> 
>> Acked-by: Dmitry Vyukov <dvyukov@google.com>
>> 
>>> ---
>>> arch/x86/Kconfig            |  1 +
>>> arch/x86/mm/fault.c         | 13 +++++++++++++
>>> arch/x86/mm/kasan_init_64.c | 10 ++++++++++
>>> 3 files changed, 24 insertions(+)
>>> 
>>> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
>>> index 222855cc0158..40562cc3771f 100644
>>> --- a/arch/x86/Kconfig
>>> +++ b/arch/x86/Kconfig
>>> @@ -134,6 +134,7 @@ config X86
>>>        select HAVE_ARCH_JUMP_LABEL
>>>        select HAVE_ARCH_JUMP_LABEL_RELATIVE
>>>        select HAVE_ARCH_KASAN                  if X86_64
>>> +       select HAVE_ARCH_KASAN_VMALLOC          if X86_64
>>>        select HAVE_ARCH_KGDB
>>>        select HAVE_ARCH_MMAP_RND_BITS          if MMU
>>>        select HAVE_ARCH_MMAP_RND_COMPAT_BITS   if MMU && COMPAT
>>> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
>>> index 6c46095cd0d9..d722230121c3 100644
>>> --- a/arch/x86/mm/fault.c
>>> +++ b/arch/x86/mm/fault.c
>>> @@ -340,8 +340,21 @@ static noinline int vmalloc_fault(unsigned long address)
>>>        pte_t *pte;
>>> 
>>>        /* Make sure we are in vmalloc area: */
>>> +#ifndef CONFIG_KASAN_VMALLOC
>>>        if (!(address >= VMALLOC_START && address < VMALLOC_END))
>>>                return -1;
>>> +#else
>>> +       /*
>>> +        * Some of the shadow mappings for the vmalloc area live outside
>>> +        * the pgds populated by kasan init. They are created dynamically,
>>> +        * so we may need to fault them in.
>>> +        *
>>> +        * You can observe this with test_vmalloc's align_shift_alloc_test.
>>> +        */
>>> +       if (!((address >= VMALLOC_START && address < VMALLOC_END) ||
>>> +             (address >= KASAN_SHADOW_START && address < KASAN_SHADOW_END)))
>>> +               return -1;
>>> +#endif
>>> 
>>>        /*
>>>         * Copy kernel mappings over when needed. This can also
>>> diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
>>> index 296da58f3013..e2fe1c1b805c 100644
>>> --- a/arch/x86/mm/kasan_init_64.c
>>> +++ b/arch/x86/mm/kasan_init_64.c
>>> @@ -352,9 +352,19 @@ void __init kasan_init(void)
>>>        shadow_cpu_entry_end = (void *)round_up(
>>>                        (unsigned long)shadow_cpu_entry_end, PAGE_SIZE);
>>> 
>>> +       /*
>>> +        * If we're in full vmalloc mode, don't back vmalloc space with early
>>> +        * shadow pages.
>>> +        */
>>> +#ifdef CONFIG_KASAN_VMALLOC
>>> +       kasan_populate_early_shadow(
>>> +               kasan_mem_to_shadow((void *)VMALLOC_END+1),
>>> +               shadow_cpu_entry_begin);
>>> +#else
>>>        kasan_populate_early_shadow(
>>>                kasan_mem_to_shadow((void *)PAGE_OFFSET + MAXMEM),
>>>                shadow_cpu_entry_begin);
>>> +#endif
>>> 
>>>        kasan_populate_shadow((unsigned long)shadow_cpu_entry_begin,
>>>                              (unsigned long)shadow_cpu_entry_end, 0);
>>> --
>>> 2.20.1
>>> 

