From: Andy Lutomirski <luto@kernel.org>
Date: Fri, 24 Nov 2017 08:27:09 -0800
Subject: Re: [PATCH v3 05/19] x86/kasan/64: Teach KASAN about the cpu_entry_area
To: Andrey Ryabinin
Cc: Andy Lutomirski, X86 ML, Borislav Petkov, linux-kernel@vger.kernel.org,
 Brian Gerst, Dave Hansen, Linus Torvalds, Josh Poimboeuf,
 Alexander Potapenko, Dmitry Vyukov, kasan-dev
In-Reply-To: <527f205f-0e2f-36c4-25a1-f9d5c55260bc@virtuozzo.com>

On Fri, Nov 24, 2017 at 5:16 AM, Andrey Ryabinin wrote:
>
> On 11/24/2017 07:32 AM, Andy Lutomirski wrote:
>> The cpu_entry_area will contain stacks.  Make sure that KASAN has
>> appropriate shadow mappings for them.
>>
>> Cc: Andrey Ryabinin
>> Cc: Alexander Potapenko
>> Cc: Dmitry Vyukov
>> Cc: kasan-dev@googlegroups.com
>> Signed-off-by: Andy Lutomirski
>> ---
>>  arch/x86/mm/kasan_init_64.c | 13 ++++++++++++-
>>  1 file changed, 12 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
>> index 99dfed6dfef8..54561dce742e 100644
>> --- a/arch/x86/mm/kasan_init_64.c
>> +++ b/arch/x86/mm/kasan_init_64.c
>> @@ -277,6 +277,7 @@ void __init kasan_early_init(void)
>>  void __init kasan_init(void)
>>  {
>>  	int i;
>> +	void *cpu_entry_area_begin, *cpu_entry_area_end;
>>
>>  #ifdef CONFIG_KASAN_INLINE
>>  	register_die_notifier(&kasan_die_notifier);
>> @@ -329,8 +330,18 @@ void __init kasan_init(void)
>>  			      (unsigned long)kasan_mem_to_shadow(_end),
>>  			      early_pfn_to_nid(__pa(_stext)));
>>
>> +	cpu_entry_area_begin = (void *)(__fix_to_virt(FIX_CPU_ENTRY_AREA_BOTTOM));
>> +	cpu_entry_area_end = (void *)(__fix_to_virt(FIX_CPU_ENTRY_AREA_TOP) + PAGE_SIZE);
>> +
>>  	kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END),
>> -			(void *)KASAN_SHADOW_END);
>> +			kasan_mem_to_shadow(cpu_entry_area_begin));
>> +
>> +	kasan_populate_shadow((unsigned long)kasan_mem_to_shadow(cpu_entry_area_begin),
>> +			      (unsigned long)kasan_mem_to_shadow(cpu_entry_area_end),
>> +			      0);
>> +
>> +	kasan_populate_zero_shadow(kasan_mem_to_shadow(cpu_entry_area_end),
>
> It seems we need to round_up() kasan_mem_to_shadow(cpu_entry_area_end) to the
> next page (or, alternatively, round_up(cpu_entry_area_end,
> KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE)).  Otherwise,
> kasan_populate_zero_shadow() will overpopulate the last shadow page of the
> cpu_entry_area with kasan_zero_page.
> We don't necessarily need to round_down(kasan_mem_to_shadow(cpu_entry_area_begin),
> PAGE_SIZE), because kasan_populate_zero_shadow() will not populate the last
> 'incomplete' page, and kasan_populate_shadow() does round_down() internally,
> which is exactly what we want here.  But it might be better to round_down()
> explicitly anyway, to avoid relying on such subtle implementation details.

Any chance you could send a fixup patch or a replacement patch?  You
obviously understand this code *way* better than I do.

Or you could do my table-based approach and fix it permanently... :)

>
>> +			(void *)KASAN_SHADOW_END);
>>
>>  	load_cr3(init_top_pgt);
>>  	__flush_tlb_all();
>>
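
For reference, a minimal sketch of the rounding Andrey describes, applied on
top of the quoted hunk. The shadow_cea_begin/shadow_cea_end names are
illustrative, not from the thread; round_up()/round_down() are the standard
kernel macros:

	void *shadow_cea_begin, *shadow_cea_end;

	/*
	 * Round the cpu_entry_area's shadow range outward to whole pages so
	 * that kasan_populate_zero_shadow() never re-populates a partially
	 * covered shadow page with kasan_zero_page.
	 */
	shadow_cea_begin = kasan_mem_to_shadow(cpu_entry_area_begin);
	shadow_cea_begin = (void *)round_down((unsigned long)shadow_cea_begin,
					      PAGE_SIZE);

	shadow_cea_end = kasan_mem_to_shadow(cpu_entry_area_end);
	shadow_cea_end = (void *)round_up((unsigned long)shadow_cea_end,
					  PAGE_SIZE);

	kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END),
				   shadow_cea_begin);

	kasan_populate_shadow((unsigned long)shadow_cea_begin,
			      (unsigned long)shadow_cea_end, 0);

	kasan_populate_zero_shadow(shadow_cea_end,
				   (void *)KASAN_SHADOW_END);

Rounding the begin down and the end up means every shadow page that overlaps
the cpu_entry_area's shadow is backed by real memory, and the two
kasan_populate_zero_shadow() calls stop cleanly at those page boundaries.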