From: Andrey Konovalov
Date: Fri, 28 May 2021 13:25:00 +0300
Subject: Re: [PATCH v4 2/4] kasan: use separate (un)poison implementation for integrated init
To: Peter Collingbourne
Cc: Alexander Potapenko, Catalin Marinas, Vincenzo Frascino, Andrew Morton,
 Jann Horn, Evgenii Stepanov, Linux Memory Management List,
 linux-arm-kernel@lists.infradead.org
In-Reply-To: <20210528010415.1852012-3-pcc@google.com>
References: <20210528010415.1852012-1-pcc@google.com>
 <20210528010415.1852012-3-pcc@google.com>
Content-Type: text/plain; charset="UTF-8"

On Fri, May 28, 2021 at 4:04 AM Peter Collingbourne wrote:
>
> Currently with integrated init page_alloc.c needs to know whether
> kasan_alloc_pages() will zero initialize memory, but this will start
> becoming more complicated once we start adding tag initialization
> support for user pages. To avoid page_alloc.c needing to know more
> details of what integrated init will do, move the unpoisoning logic
> for integrated init into the HW tags implementation. Currently the
> logic is identical but it will diverge in subsequent patches.
>
> For symmetry do the same for poisoning although this logic will
> be unaffected by subsequent patches.
>
> Signed-off-by: Peter Collingbourne
> Link: https://linux-review.googlesource.com/id/I2c550234c6c4a893c48c18ff0c6ce658c7c67056
> ---
> v4:
> - use IS_ENABLED(CONFIG_KASAN)
> - add comments to kasan_alloc_pages and kasan_free_pages
> - remove line break
>
> v3:
> - use BUILD_BUG()
>
> v2:
> - fix build with KASAN disabled
>
>  include/linux/kasan.h | 64 +++++++++++++++++++++++++------------------
>  mm/kasan/common.c     |  4 +--
>  mm/kasan/hw_tags.c    | 22 +++++++++++++++
>  mm/mempool.c          |  6 ++--
>  mm/page_alloc.c       | 55 +++++++++++++++++++------------------
>  5 files changed, 95 insertions(+), 56 deletions(-)
>
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index b1678a61e6a7..a1c7ce5f3e4f 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -2,6 +2,7 @@
>  #ifndef _LINUX_KASAN_H
>  #define _LINUX_KASAN_H
>
> +#include
>  #include
>  #include
>
> @@ -79,14 +80,6 @@ static inline void kasan_disable_current(void) {}
>
>  #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
>
> -#ifdef CONFIG_KASAN
> -
> -struct kasan_cache {
> -        int alloc_meta_offset;
> -        int free_meta_offset;
> -        bool is_kmalloc;
> -};
> -
>  #ifdef CONFIG_KASAN_HW_TAGS
>
>  DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
> @@ -101,11 +94,14 @@ static inline bool kasan_has_integrated_init(void)
>          return kasan_enabled();
>  }
>
> +void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags);
> +void kasan_free_pages(struct page *page, unsigned int order);
> +
>  #else /* CONFIG_KASAN_HW_TAGS */
>
>  static inline bool kasan_enabled(void)
>  {
> -        return true;
> +        return IS_ENABLED(CONFIG_KASAN);
>  }
>
>  static inline bool kasan_has_integrated_init(void)
> @@ -113,8 +109,30 @@ static inline bool kasan_has_integrated_init(void)
>          return false;
>  }
>
> +static __always_inline void kasan_alloc_pages(struct page *page,
> +                                              unsigned int order, gfp_t flags)
> +{
> +        /* Only available for integrated init. */
> +        BUILD_BUG();
> +}
> +
> +static __always_inline void kasan_free_pages(struct page *page,
> +                                             unsigned int order)
> +{
> +        /* Only available for integrated init. */
> +        BUILD_BUG();
> +}
> +
>  #endif /* CONFIG_KASAN_HW_TAGS */
>
> +#ifdef CONFIG_KASAN
> +
> +struct kasan_cache {
> +        int alloc_meta_offset;
> +        int free_meta_offset;
> +        bool is_kmalloc;
> +};
> +
>  slab_flags_t __kasan_never_merge(void);
>  static __always_inline slab_flags_t kasan_never_merge(void)
>  {
> @@ -130,20 +148,20 @@ static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
>          __kasan_unpoison_range(addr, size);
>  }
>
> -void __kasan_alloc_pages(struct page *page, unsigned int order, bool init);
> -static __always_inline void kasan_alloc_pages(struct page *page,
> +void __kasan_poison_pages(struct page *page, unsigned int order, bool init);
> +static __always_inline void kasan_poison_pages(struct page *page,
>                                                 unsigned int order, bool init)
>  {
>          if (kasan_enabled())
> -                __kasan_alloc_pages(page, order, init);
> +                __kasan_poison_pages(page, order, init);
>  }
>
> -void __kasan_free_pages(struct page *page, unsigned int order, bool init);
> -static __always_inline void kasan_free_pages(struct page *page,
> -                                             unsigned int order, bool init)
> +void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init);
> +static __always_inline void kasan_unpoison_pages(struct page *page,
> +                                                 unsigned int order, bool init)
>  {
>          if (kasan_enabled())
> -                __kasan_free_pages(page, order, init);
> +                __kasan_unpoison_pages(page, order, init);
>  }
>
>  void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> @@ -285,21 +303,15 @@ void kasan_restore_multi_shot(bool enabled);
>
>  #else /* CONFIG_KASAN */
>
> -static inline bool kasan_enabled(void)
> -{
> -        return false;
> -}
> -static inline bool kasan_has_integrated_init(void)
> -{
> -        return false;
> -}
>  static inline slab_flags_t kasan_never_merge(void)
>  {
>          return 0;
>  }
>  static inline void kasan_unpoison_range(const void *address, size_t size) {}
> -static inline void kasan_alloc_pages(struct page *page, unsigned int order, bool init) {}
> -static inline void kasan_free_pages(struct page *page, unsigned int order, bool init) {}
> +static inline void kasan_poison_pages(struct page *page, unsigned int order,
> +                                      bool init) {}
> +static inline void kasan_unpoison_pages(struct page *page, unsigned int order,
> +                                        bool init) {}
>  static inline void kasan_cache_create(struct kmem_cache *cache,
>                                        unsigned int *size,
>                                        slab_flags_t *flags) {}
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 6bb87f2acd4e..0ecd293af344 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -97,7 +97,7 @@ slab_flags_t __kasan_never_merge(void)
>          return 0;
>  }
>
> -void __kasan_alloc_pages(struct page *page, unsigned int order, bool init)
> +void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init)
>  {
>          u8 tag;
>          unsigned long i;
> @@ -111,7 +111,7 @@ void __kasan_alloc_pages(struct page *page, unsigned int order, bool init)
>          kasan_unpoison(page_address(page), PAGE_SIZE << order, init);
>  }
>
> -void __kasan_free_pages(struct page *page, unsigned int order, bool init)
> +void __kasan_poison_pages(struct page *page, unsigned int order, bool init)
>  {
>          if (likely(!PageHighMem(page)))
>                  kasan_poison(page_address(page), PAGE_SIZE << order,
> diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> index 4004388b4e4b..9d0f6f934016 100644
> --- a/mm/kasan/hw_tags.c
> +++ b/mm/kasan/hw_tags.c
> @@ -238,6 +238,28 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
>          return &alloc_meta->free_track[0];
>  }
>
> +void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
> +{
> +        /*
> +         * This condition should match the one in post_alloc_hook() in
> +         * page_alloc.c.
> +         */
> +        bool init = !want_init_on_free() && want_init_on_alloc(flags);

Now we have a comment here ...

> +
> +        kasan_unpoison_pages(page, order, init);
> +}
> +
> +void kasan_free_pages(struct page *page, unsigned int order)
> +{
> +        /*
> +         * This condition should match the one in free_pages_prepare() in
> +         * page_alloc.c.
> +         */
> +        bool init = want_init_on_free();

and here, ...

> +
> +        kasan_poison_pages(page, order, init);
> +}
> +
>  #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
>
>  void kasan_set_tagging_report_once(bool state)
> diff --git a/mm/mempool.c b/mm/mempool.c
> index a258cf4de575..0b8afbec3e35 100644
> --- a/mm/mempool.c
> +++ b/mm/mempool.c
> @@ -106,7 +106,8 @@ static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
>          if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
>                  kasan_slab_free_mempool(element);
>          else if (pool->alloc == mempool_alloc_pages)
> -                kasan_free_pages(element, (unsigned long)pool->pool_data, false);
> +                kasan_poison_pages(element, (unsigned long)pool->pool_data,
> +                                   false);
>  }
>
>  static void kasan_unpoison_element(mempool_t *pool, void *element)
> @@ -114,7 +115,8 @@ static void kasan_unpoison_element(mempool_t *pool, void *element)
>          if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
>                  kasan_unpoison_range(element, __ksize(element));
>          else if (pool->alloc == mempool_alloc_pages)
> -                kasan_alloc_pages(element, (unsigned long)pool->pool_data, false);
> +                kasan_unpoison_pages(element, (unsigned long)pool->pool_data,
> +                                     false);
>  }
>
>  static __always_inline void add_element(mempool_t *pool, void *element)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index aaa1655cf682..4fddb7cac3c6 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -382,7 +382,7 @@ int page_group_by_mobility_disabled __read_mostly;
>  static DEFINE_STATIC_KEY_TRUE(deferred_pages);
>
>  /*
> - * Calling kasan_free_pages() only after deferred memory initialization
> + * Calling kasan_poison_pages() only after deferred memory initialization
>   * has completed. Poisoning pages during deferred memory init will greatly
>   * lengthen the process and cause problem in large memory systems as the
>   * deferred pages initialization is done with interrupt disabled.
> @@ -394,15 +394,11 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages);
>   * on-demand allocation and then freed again before the deferred pages
>   * initialization is done, but this is not likely to happen.
>   */
> -static inline void kasan_free_nondeferred_pages(struct page *page, int order,
> -                                        bool init, fpi_t fpi_flags)
> +static inline bool should_skip_kasan_poison(fpi_t fpi_flags)
>  {
> -        if (static_branch_unlikely(&deferred_pages))
> -                return;
> -        if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> -            (fpi_flags & FPI_SKIP_KASAN_POISON))
> -                return;
> -        kasan_free_pages(page, order, init);
> +        return static_branch_unlikely(&deferred_pages) ||
> +               (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> +                (fpi_flags & FPI_SKIP_KASAN_POISON));
>  }
>
>  /* Returns true if the struct page for the pfn is uninitialised */
> @@ -453,13 +449,10 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
>          return false;
>  }
>  #else
> -static inline void kasan_free_nondeferred_pages(struct page *page, int order,
> -                                        bool init, fpi_t fpi_flags)
> +static inline bool should_skip_kasan_poison(fpi_t fpi_flags)
>  {
> -        if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> -            (fpi_flags & FPI_SKIP_KASAN_POISON))
> -                return;
> -        kasan_free_pages(page, order, init);
> +        return (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> +                (fpi_flags & FPI_SKIP_KASAN_POISON));
>  }
>
>  static inline bool early_page_uninitialised(unsigned long pfn)
> @@ -1245,7 +1238,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
>                          unsigned int order, bool check_free, fpi_t fpi_flags)
>  {
>          int bad = 0;
> -        bool init;
> +        bool skip_kasan_poison = should_skip_kasan_poison(fpi_flags);
>
>          VM_BUG_ON_PAGE(PageTail(page), page);
>
> @@ -1314,10 +1307,17 @@ static __always_inline bool free_pages_prepare(struct page *page,
>           * With hardware tag-based KASAN, memory tags must be set before the
>           * page becomes unavailable via debug_pagealloc or arch_free_page.
>           */
> -        init = want_init_on_free();
> -        if (init && !kasan_has_integrated_init())
> -                kernel_init_free_pages(page, 1 << order);
> -        kasan_free_nondeferred_pages(page, order, init, fpi_flags);
> +        if (kasan_has_integrated_init()) {
> +                if (!skip_kasan_poison)
> +                        kasan_free_pages(page, order);
> +        } else {
> +                bool init = want_init_on_free();

... but not here ...

> +
> +                if (init)
> +                        kernel_init_free_pages(page, 1 << order);
> +                if (!skip_kasan_poison)
> +                        kasan_poison_pages(page, order, init);
> +        }
>
>          /*
>           * arch_free_page() can make the page's contents inaccessible. s390
> @@ -2324,8 +2324,6 @@ static bool check_new_pages(struct page *page, unsigned int order)
>  inline void post_alloc_hook(struct page *page, unsigned int order,
>                                  gfp_t gfp_flags)
>  {
> -        bool init;
> -
>          set_page_private(page, 0);
>          set_page_refcounted(page);
>
> @@ -2344,10 +2342,15 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
>           * kasan_alloc_pages and kernel_init_free_pages must be
>           * kept together to avoid discrepancies in behavior.
>           */
> -        init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
> -        kasan_alloc_pages(page, order, init);
> -        if (init && !kasan_has_integrated_init())
> -                kernel_init_free_pages(page, 1 << order);
> +        if (kasan_has_integrated_init()) {
> +                kasan_alloc_pages(page, order, gfp_flags);
> +        } else {
> +                bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);

... or here. So if someone updates one of these conditions, they might
forget the ones in the KASAN code. Is there a strong reason not to use a
macro or static inline helper? If not, let's do that (a rough sketch of
what I mean is below the quoted diff).

> +
> +                kasan_unpoison_pages(page, order, init);
> +                if (init)
> +                        kernel_init_free_pages(page, 1 << order);
> +        }
>
>          set_page_owner(page, order, gfp_flags);
>  }
> --
> 2.32.0.rc0.204.g9fa02ecfa5-goog
>
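To make that concrete, here is the kind of helper I have in mind. This is
a rough, untested sketch with made-up names, placed wherever both
page_alloc.c and mm/kasan/hw_tags.c can see it (perhaps next to
want_init_on_alloc()/want_init_on_free() in include/linux/mm.h):

/*
 * Single place for the "should page contents be zero-initialized"
 * decisions, so the checks in post_alloc_hook()/kasan_alloc_pages()
 * and in free_pages_prepare()/kasan_free_pages() cannot drift apart.
 */
static inline bool want_page_init_on_alloc(gfp_t flags)
{
        return !want_init_on_free() && want_init_on_alloc(flags);
}

static inline bool want_page_init_on_free(void)
{
        return want_init_on_free();
}

Then both the page_alloc.c paths and the hw_tags.c hooks would call these
helpers instead of open-coding the conditions, and the "This condition
should match ..." comments could go away.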