From: Andy Lutomirski
Date: Thu, 23 Nov 2017 07:22:41 -0800
Subject: Re: [PATCH v2 06/18] x86/kasan/64: Teach KASAN about the cpu_entry_area
To: Andrey Ryabinin
Cc: Andy Lutomirski, X86 ML, Borislav Petkov, "linux-kernel@vger.kernel.org",
 Brian Gerst, Dave Hansen, Linus Torvalds, Josh Poimboeuf,
 Alexander Potapenko, Dmitry Vyukov, kasan-dev

On Thu, Nov 23, 2017 at 2:08 AM, Andrey Ryabinin wrote:
>
>
> On 11/22/2017 06:22 PM, Andy Lutomirski wrote:
>> On Wed, Nov 22, 2017 at 1:05 AM, Andrey Ryabinin
>> wrote:
>>>
>>>
>>> On 11/22/2017 07:44 AM, Andy Lutomirski wrote:
>>>> The cpu_entry_area will contain stacks.  Make sure that KASAN has
>>>> appropriate shadow mappings for them.
>>>>
>>>> Cc: Andrey Ryabinin
>>>> Cc: Alexander Potapenko
>>>> Cc: Dmitry Vyukov
>>>> Cc: kasan-dev@googlegroups.com
>>>> Signed-off-by: Andy Lutomirski
>>>> ---
>>>>  arch/x86/mm/kasan_init_64.c | 9 ++++++++-
>>>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
>>>> index 99dfed6dfef8..43d376687315 100644
>>>> --- a/arch/x86/mm/kasan_init_64.c
>>>> +++ b/arch/x86/mm/kasan_init_64.c
>>>> @@ -330,7 +330,14 @@ void __init kasan_init(void)
>>>>                early_pfn_to_nid(__pa(_stext)));
>>>>
>>>>   kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END),
>>>> -                 (void *)KASAN_SHADOW_END);
>>>> +                 kasan_mem_to_shadow((void *)(__fix_to_virt(FIX_CPU_ENTRY_AREA_BOTTOM))));
>>>> +
>>>> +  kasan_populate_shadow((unsigned long)kasan_mem_to_shadow((void *)(__fix_to_virt(FIX_CPU_ENTRY_AREA_BOTTOM))),
>>>> +                        (unsigned long)kasan_mem_to_shadow((void *)(__fix_to_virt(FIX_CPU_ENTRY_AREA_TOP) + PAGE_SIZE)),
>>>
>>> What's '+ PAGE_SIZE' for?
>>>
>>
>> __fix_to_virt(..._TOP) returns the address of the *bottom* of the last
>> cpu_entry_area page.  +PAGE_SIZE returns one past the end of the
>> region, which I assume is the correct thing to pass.
>>
>
> Right.
>
> Perhaps it would be better to use variables, just to avoid such awfully
> long lines. I mean like this:
>
>	fixmap_shadow_start = (void *)__fix_to_virt(FIX_CPU_ENTRY_AREA_BOTTOM);
>	fixmap_shadow_start = kasan_mem_to_shadow(fixmap_shadow_start);
>
>	fixmap_shadow_end = (void *)__fix_to_virt(FIX_CPU_ENTRY_AREA_TOP) + PAGE_SIZE;
>	fixmap_shadow_end = kasan_mem_to_shadow(fixmap_shadow_end);
>
>	kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END),
>				   fixmap_shadow_start);
>
>	kasan_populate_shadow((unsigned long)fixmap_shadow_start,
>			      (unsigned long)fixmap_shadow_end,
>			      0);
>
> I'm also thinking that we should change kasan_populate_shadow() to take
> void * instead of 'unsigned long' to avoid those casts.
I did something similar, but I left the kasan_mem_to_shadow() calls in
for consistency with the rest.  I think the real way to clean this up
is like this:

struct kasan_range {
	void *start;
	void *end;
};

struct kasan_range ranges[] = {
	{ range 1 },
	{ range 2 },
};

sort(ranges, ARRAY_SIZE(ranges), sizeof(ranges[0]), ...);

last_end = PAGE_OFFSET;  /* or whatever is right */
for (i = 0; i < ARRAY_SIZE(ranges); i++) {
	WARN_ON(ranges[i].start < last_end);
	if (ranges[i].start > last_end)
		kasan_populate_zero_shadow(...);
	kasan_populate_shadow(...);
	last_end = ranges[i].end;
}
kasan_populate_zero_shadow(last_end, the real end);

Then the code doesn't need to duplicate each boundary or to hardcode
the order in which things appear.