Date: Thu, 15 Aug 2019 12:28:44 +0100
From: Mark Rutland
To: Daniel Axtens
Cc: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
	aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
	linux-kernel@vger.kernel.org, dvyukov@google.com,
	linuxppc-dev@lists.ozlabs.org, gor@linux.ibm.com
Subject: Re: [PATCH v4 0/3] kasan: support backing vmalloc space with real shadow memory
Message-ID: <20190815112844.GC22153@lakrids.cambridge.arm.com>
References: <20190815001636.12235-1-dja@axtens.net>
In-Reply-To: <20190815001636.12235-1-dja@axtens.net>

On Thu, Aug 15, 2019 at 10:16:33AM +1000, Daniel Axtens wrote:
> Currently, vmalloc space is backed by the early shadow page. This
> means that kasan is incompatible with VMAP_STACK, and it also provides
> a hurdle for architectures that do not have a dedicated module space
> (like powerpc64).
>
> This series provides a mechanism to back vmalloc space with real,
> dynamically allocated memory. I have only wired up x86, because that's
> the only currently supported arch I can work with easily, but it's
> very easy to wire up other architectures.

I'm happy to send patches for arm64 once we've settled some conflicting
rework going on for 52-bit VA support.

> This has been discussed before in the context of VMAP_STACK:
> - https://bugzilla.kernel.org/show_bug.cgi?id=202009
> - https://lkml.org/lkml/2018/7/22/198
> - https://lkml.org/lkml/2019/7/19/822
>
> In terms of implementation details:
>
> Most mappings in vmalloc space are small, requiring less than a full
> page of shadow space. Allocating a full shadow page per mapping would
> therefore be wasteful. Furthermore, to ensure that different mappings
> use different shadow pages, mappings would have to be aligned to
> KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
>
> Instead, share backing space across multiple mappings. Allocate
> a backing page the first time a mapping in vmalloc space uses a
> particular page of the shadow region. Keep this page around
> regardless of whether the mapping is later freed - in the mean time
> the page could have become shared by another vmalloc mapping.
>
> This can in theory lead to unbounded memory growth, but the vmalloc
> allocator is pretty good at reusing addresses, so the practical memory
> usage appears to grow at first but then stay fairly stable.
>
> If we run into practical memory exhaustion issues, I'm happy to
> consider hooking into the book-keeping that vmap does, but I am not
> convinced that it will be an issue.

FWIW, I haven't spotted such memory exhaustion after a week of Syzkaller
fuzzing with the last patchset, across 3 machines, so that sounds fine
to me.

Otherwise, this looks good to me now! For the x86 and fork patch, feel
free to add:

Acked-by: Mark Rutland

Mark.

> v1: https://lore.kernel.org/linux-mm/20190725055503.19507-1-dja@axtens.net/
> v2: https://lore.kernel.org/linux-mm/20190729142108.23343-1-dja@axtens.net/
>  Address review comments:
>  - Patch 1: use kasan_unpoison_shadow's built-in handling of
>    ranges that do not align to a full shadow byte
>  - Patch 3: prepopulate pgds rather than faulting things in
> v3: https://lore.kernel.org/linux-mm/20190731071550.31814-1-dja@axtens.net/
>  Address comments from Mark Rutland:
>  - kasan_populate_vmalloc is a better name
>  - handle concurrency correctly
>  - various nits and cleanups
>  - relax module alignment in KASAN_VMALLOC case
> v4: Changes to patch 1 only:
>  - Integrate Mark's rework, thanks Mark!
>  - handle the case where kasan_populate_shadow might fail
>  - poison shadow on free, allowing the alloc path to just
>    unpoison memory that it uses
>
> Daniel Axtens (3):
>   kasan: support backing vmalloc space with real shadow memory
>   fork: support VMAP_STACK with KASAN_VMALLOC
>   x86/kasan: support KASAN_VMALLOC
>
>  Documentation/dev-tools/kasan.rst | 60 +++++++++++++++++++++++++++
>  arch/Kconfig                      |  9 +++--
>  arch/x86/Kconfig                  |  1 +
>  arch/x86/mm/kasan_init_64.c       | 61 ++++++++++++++++++++++++++++
>  include/linux/kasan.h             | 24 +++++++++++
>  include/linux/moduleloader.h      |  2 +-
>  include/linux/vmalloc.h           | 12 ++++++
>  kernel/fork.c                     |  4 ++
>  lib/Kconfig.kasan                 | 16 ++++++++
>  lib/test_kasan.c                  | 26 ++++++++++++
>  mm/kasan/common.c                 | 67 +++++++++++++++++++++++++++++++
>  mm/kasan/generic_report.c         |  3 ++
>  mm/kasan/kasan.h                  |  1 +
>  mm/vmalloc.c                      | 28 ++++++++++++-
>  14 files changed, 308 insertions(+), 6 deletions(-)
>
> --
> 2.20.1