From: Andrey Konovalov <andreyknvl@gmail.com>
Date: Wed, 2 Jun 2021 15:19:54 +0300
Subject: Re: [PATCH v4 2/4] kasan: use separate (un)poison implementation for integrated init
To: Peter Collingbourne <pcc@google.com>
Cc: Alexander Potapenko, Catalin Marinas, Vincenzo Frascino, Andrew Morton,
 Jann Horn, Evgenii Stepanov, Linux Memory Management List, Linux ARM
References: <20210528010415.1852012-1-pcc@google.com> <20210528010415.1852012-3-pcc@google.com>

On Tue, Jun 1, 2021 at 10:29 PM Peter Collingbourne <pcc@google.com> wrote:
>
> On Fri, May 28, 2021 at 3:25 AM Andrey Konovalov <andreyknvl@gmail.com> wrote:
> >
> > On Fri, May 28, 2021 at 4:04 AM Peter Collingbourne <pcc@google.com> wrote:
> > >
> > > Currently with integrated init page_alloc.c needs to know whether
> > > kasan_alloc_pages() will zero initialize memory, but this will start
> > > becoming more complicated once we start adding tag initialization
> > > support for user pages. To avoid page_alloc.c needing to know more
> > > details of what integrated init will do, move the unpoisoning logic
> > > for integrated init into the HW tags implementation. Currently the
> > > logic is identical but it will diverge in subsequent patches.
> > >
> > > For symmetry do the same for poisoning although this logic will
> > > be unaffected by subsequent patches.
> > >
> > > Signed-off-by: Peter Collingbourne <pcc@google.com>
> > > Link: https://linux-review.googlesource.com/id/I2c550234c6c4a893c48c18ff0c6ce658c7c67056
> > > ---
> > > v4:
> > > - use IS_ENABLED(CONFIG_KASAN)
> > > - add comments to kasan_alloc_pages and kasan_free_pages
> > > - remove line break
> > >
> > > v3:
> > > - use BUILD_BUG()
> > >
> > > v2:
> > > - fix build with KASAN disabled
> > >
> > >  include/linux/kasan.h | 64 +++++++++++++++++++++++++------------------
> > >  mm/kasan/common.c     |  4 +--
> > >  mm/kasan/hw_tags.c    | 22 +++++++++++++++
> > >  mm/mempool.c          |  6 ++--
> > >  mm/page_alloc.c       | 55 +++++++++++++++++++------------------
> > >  5 files changed, 95 insertions(+), 56 deletions(-)
> > >
> > > diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> > > index b1678a61e6a7..a1c7ce5f3e4f 100644
> > > --- a/include/linux/kasan.h
> > > +++ b/include/linux/kasan.h
> > > @@ -2,6 +2,7 @@
> > >  #ifndef _LINUX_KASAN_H
> > >  #define _LINUX_KASAN_H
> > >
> > > +#include <linux/bug.h>
> > >  #include <linux/static_key.h>
> > >  #include <linux/types.h>
> > >
> > > @@ -79,14 +80,6 @@ static inline void kasan_disable_current(void) {}
> > >
> > >  #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
> > >
> > > -#ifdef CONFIG_KASAN
> > > -
> > > -struct kasan_cache {
> > > -	int alloc_meta_offset;
> > > -	int free_meta_offset;
> > > -	bool is_kmalloc;
> > > -};
> > > -
> > >  #ifdef CONFIG_KASAN_HW_TAGS
> > >
> > >  DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
> > > @@ -101,11 +94,14 @@ static inline bool kasan_has_integrated_init(void)
> > >  	return kasan_enabled();
> > >  }
> > >
> > > +void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags);
> > > +void kasan_free_pages(struct page *page, unsigned int order);
> > > +
> > >  #else /* CONFIG_KASAN_HW_TAGS */
> > >
> > >  static inline bool kasan_enabled(void)
> > >  {
> > > -	return true;
> > > +	return IS_ENABLED(CONFIG_KASAN);
> > >  }
> > >
> > >  static inline bool kasan_has_integrated_init(void)
> > > @@ -113,8 +109,30 @@ static inline bool kasan_has_integrated_init(void)
> > >  	return false;
> > >  }
> > >
> > > +static __always_inline void kasan_alloc_pages(struct page *page,
> > > +					      unsigned int order, gfp_t flags)
> > > +{
> > > +	/* Only available for integrated init. */
> > > +	BUILD_BUG();
> > > +}
> > > +
> > > +static __always_inline void kasan_free_pages(struct page *page,
> > > +					     unsigned int order)
> > > +{
> > > +	/* Only available for integrated init. */
> > > +	BUILD_BUG();
> > > +}
> > > +
> > >  #endif /* CONFIG_KASAN_HW_TAGS */
> > >
> > > +#ifdef CONFIG_KASAN
> > > +
> > > +struct kasan_cache {
> > > +	int alloc_meta_offset;
> > > +	int free_meta_offset;
> > > +	bool is_kmalloc;
> > > +};
> > > +
> > >  slab_flags_t __kasan_never_merge(void);
> > >  static __always_inline slab_flags_t kasan_never_merge(void)
> > >  {
> > > @@ -130,20 +148,20 @@ static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
> > >  		__kasan_unpoison_range(addr, size);
> > >  }
> > >
> > > -void __kasan_alloc_pages(struct page *page, unsigned int order, bool init);
> > > -static __always_inline void kasan_alloc_pages(struct page *page,
> > > +void __kasan_poison_pages(struct page *page, unsigned int order, bool init);
> > > +static __always_inline void kasan_poison_pages(struct page *page,
> > >  						unsigned int order, bool init)
> > >  {
> > >  	if (kasan_enabled())
> > > -		__kasan_alloc_pages(page, order, init);
> > > +		__kasan_poison_pages(page, order, init);
> > >  }
> > >
> > > -void __kasan_free_pages(struct page *page, unsigned int order, bool init);
> > > -static __always_inline void kasan_free_pages(struct page *page,
> > > -					     unsigned int order, bool init)
> > > +void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init);
> > > +static __always_inline void kasan_unpoison_pages(struct page *page,
> > > +						 unsigned int order, bool init)
> > >  {
> > >  	if (kasan_enabled())
> > > -		__kasan_free_pages(page, order, init);
> > > +		__kasan_unpoison_pages(page, order, init);
> > >  }
> > >
> > >  void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > > @@ -285,21 +303,15 @@ void kasan_restore_multi_shot(bool enabled);
> > >
> > >  #else /* CONFIG_KASAN */
> > >
> > > -static inline bool kasan_enabled(void)
> > > -{
> > > -	return false;
> > > -}
> > > -static inline bool kasan_has_integrated_init(void)
> > > -{
> > > -	return false;
> > > -}
> > >  static inline slab_flags_t kasan_never_merge(void)
> > >  {
> > >  	return 0;
> > >  }
> > >  static inline void kasan_unpoison_range(const void *address, size_t size) {}
> > > -static inline void kasan_alloc_pages(struct page *page, unsigned int order, bool init) {}
> > > -static inline void kasan_free_pages(struct page *page, unsigned int order, bool init) {}
> > > +static inline void kasan_poison_pages(struct page *page, unsigned int order,
> > > +				      bool init) {}
> > > +static inline void kasan_unpoison_pages(struct page *page, unsigned int order,
> > > +					bool init) {}
> > >  static inline void kasan_cache_create(struct kmem_cache *cache,
> > >  				      unsigned int *size,
> > >  				      slab_flags_t *flags) {}
> > > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> > > index 6bb87f2acd4e..0ecd293af344 100644
> > > --- a/mm/kasan/common.c
> > > +++ b/mm/kasan/common.c
> > > @@ -97,7 +97,7 @@ slab_flags_t __kasan_never_merge(void)
> > >  	return 0;
> > >  }
> > >
> > > -void __kasan_alloc_pages(struct page *page, unsigned int order, bool init)
> > > +void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init)
> > >  {
> > >  	u8 tag;
> > >  	unsigned long i;
> > > @@ -111,7 +111,7 @@ void __kasan_alloc_pages(struct page *page, unsigned int order, bool init)
> > >  	kasan_unpoison(page_address(page), PAGE_SIZE << order, init);
> > >  }
> > >
> > > -void __kasan_free_pages(struct page *page, unsigned int order, bool init)
> > > +void __kasan_poison_pages(struct page *page, unsigned int order, bool init)
> > >  {
> > >  	if (likely(!PageHighMem(page)))
> > >  		kasan_poison(page_address(page), PAGE_SIZE << order,
> > > diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> > > index 4004388b4e4b..9d0f6f934016 100644
> > > --- a/mm/kasan/hw_tags.c
> > > +++ b/mm/kasan/hw_tags.c
> > > @@ -238,6 +238,28 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
> > >  	return &alloc_meta->free_track[0];
> > >  }
> > >
> > > +void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
> > > +{
> > > +	/*
> > > +	 * This condition should match the one in post_alloc_hook() in
> > > +	 * page_alloc.c.
> > > +	 */
> > > +	bool init = !want_init_on_free() && want_init_on_alloc(flags);
> >
> > Now we have a comment here ...
> >
> > > +
> > > +	kasan_unpoison_pages(page, order, init);
> > > +}
> > > +
> > > +void kasan_free_pages(struct page *page, unsigned int order)
> > > +{
> > > +	/*
> > > +	 * This condition should match the one in free_pages_prepare() in
> > > +	 * page_alloc.c.
> > > +	 */
> > > +	bool init = want_init_on_free();
> >
> > and here, ...
> >
> > > +
> > > +	kasan_poison_pages(page, order, init);
> > > +}
> > > +
> > >  #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
> > >
> > >  void kasan_set_tagging_report_once(bool state)
> > > diff --git a/mm/mempool.c b/mm/mempool.c
> > > index a258cf4de575..0b8afbec3e35 100644
> > > --- a/mm/mempool.c
> > > +++ b/mm/mempool.c
> > > @@ -106,7 +106,8 @@ static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
> > >  	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
> > >  		kasan_slab_free_mempool(element);
> > >  	else if (pool->alloc == mempool_alloc_pages)
> > > -		kasan_free_pages(element, (unsigned long)pool->pool_data, false);
> > > +		kasan_poison_pages(element, (unsigned long)pool->pool_data,
> > > +				   false);
> > >  }
> > >
> > >  static void kasan_unpoison_element(mempool_t *pool, void *element)
> > > @@ -114,7 +115,8 @@ static void kasan_unpoison_element(mempool_t *pool, void *element)
> > >  	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
> > >  		kasan_unpoison_range(element, __ksize(element));
> > >  	else if (pool->alloc == mempool_alloc_pages)
> > > -		kasan_alloc_pages(element, (unsigned long)pool->pool_data, false);
> > > +		kasan_unpoison_pages(element, (unsigned long)pool->pool_data,
> > > +				     false);
> > >  }
> > >
> > >  static __always_inline void add_element(mempool_t *pool, void *element)
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index aaa1655cf682..4fddb7cac3c6 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -382,7 +382,7 @@ int page_group_by_mobility_disabled __read_mostly;
> > >  static DEFINE_STATIC_KEY_TRUE(deferred_pages);
> > >
> > >  /*
> > > - * Calling kasan_free_pages() only after deferred memory initialization
> > > + * Calling kasan_poison_pages() only after deferred memory initialization
> > >   * has completed. Poisoning pages during deferred memory init will greatly
> > >   * lengthen the process and cause problem in large memory systems as the
> > >   * deferred pages initialization is done with interrupt disabled.
> > > @@ -394,15 +394,11 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages);
> > >   * on-demand allocation and then freed again before the deferred pages
> > >   * initialization is done, but this is not likely to happen.
> > >   */
> > > -static inline void kasan_free_nondeferred_pages(struct page *page, int order,
> > > -						bool init, fpi_t fpi_flags)
> > > +static inline bool should_skip_kasan_poison(fpi_t fpi_flags)
> > >  {
> > > -	if (static_branch_unlikely(&deferred_pages))
> > > -		return;
> > > -	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> > > -	    (fpi_flags & FPI_SKIP_KASAN_POISON))
> > > -		return;
> > > -	kasan_free_pages(page, order, init);
> > > +	return static_branch_unlikely(&deferred_pages) ||
> > > +	       (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> > > +		(fpi_flags & FPI_SKIP_KASAN_POISON));
> > >  }
> > >
> > >  /* Returns true if the struct page for the pfn is uninitialised */
> > > @@ -453,13 +449,10 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
> > >  	return false;
> > >  }
> > >  #else
> > > -static inline void kasan_free_nondeferred_pages(struct page *page, int order,
> > > -						bool init, fpi_t fpi_flags)
> > > +static inline bool should_skip_kasan_poison(fpi_t fpi_flags)
> > >  {
> > > -	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> > > -	    (fpi_flags & FPI_SKIP_KASAN_POISON))
> > > -		return;
> > > -	kasan_free_pages(page, order, init);
> > > +	return (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> > > +		(fpi_flags & FPI_SKIP_KASAN_POISON));
> > >  }
> > >
> > >  static inline bool early_page_uninitialised(unsigned long pfn)
> > > @@ -1245,7 +1238,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
> > >  			unsigned int order, bool check_free, fpi_t fpi_flags)
> > >  {
> > >  	int bad = 0;
> > > -	bool init;
> > > +	bool skip_kasan_poison = should_skip_kasan_poison(fpi_flags);
> > >
> > >  	VM_BUG_ON_PAGE(PageTail(page), page);
> > >
> > > @@ -1314,10 +1307,17 @@ static __always_inline bool free_pages_prepare(struct page *page,
> > >  	 * With hardware tag-based KASAN, memory tags must be set before the
> > >  	 * page becomes unavailable via debug_pagealloc or arch_free_page.
> > >  	 */
> > > -	init = want_init_on_free();
> > > -	if (init && !kasan_has_integrated_init())
> > > -		kernel_init_free_pages(page, 1 << order);
> > > -	kasan_free_nondeferred_pages(page, order, init, fpi_flags);
> > > +	if (kasan_has_integrated_init()) {
> > > +		if (!skip_kasan_poison)
> > > +			kasan_free_pages(page, order);
> > > +	} else {
> > > +		bool init = want_init_on_free();
> >
> > ... but not here ...
> >
> > > +
> > > +		if (init)
> > > +			kernel_init_free_pages(page, 1 << order);
> > > +		if (!skip_kasan_poison)
> > > +			kasan_poison_pages(page, order, init);
> > > +	}
> > >
> > >  	/*
> > >  	 * arch_free_page() can make the page's contents inaccessible. s390
> > > @@ -2324,8 +2324,6 @@ static bool check_new_pages(struct page *page, unsigned int order)
> > >  inline void post_alloc_hook(struct page *page, unsigned int order,
> > >  					gfp_t gfp_flags)
> > >  {
> > > -	bool init;
> > > -
> > >  	set_page_private(page, 0);
> > >  	set_page_refcounted(page);
> > >
> > > @@ -2344,10 +2342,15 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
> > >  	 * kasan_alloc_pages and kernel_init_free_pages must be
> > >  	 * kept together to avoid discrepancies in behavior.
> > >  	 */
> > > -	init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
> > > -	kasan_alloc_pages(page, order, init);
> > > -	if (init && !kasan_has_integrated_init())
> > > -		kernel_init_free_pages(page, 1 << order);
> > > +	if (kasan_has_integrated_init()) {
> > > +		kasan_alloc_pages(page, order, gfp_flags);
> > > +	} else {
> > > +		bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
> >
> > ... or here.
> >
> > So if someone updates one of these conditions, they might forget the
> > ones in KASAN code.
> >
> > Is there a strong reason not to use a macro or static inline helper?
> > If not, let's do that.
>
> I'm not sure that it will accomplish much. It isn't much code after
> all and it means that we are adding another level of indirection which
> readers will need to look through in order to understand what is going
> on.
>
> We already have this comment in free_pages_prepare:
>
>         /*
>          * As memory initialization might be integrated into KASAN,
>          * kasan_free_pages and kernel_init_free_pages must be
>          * kept together to avoid discrepancies in behavior.
>          *
>          * With hardware tag-based KASAN, memory tags must be set before the
>          * page becomes unavailable via debug_pagealloc or arch_free_page.
>          */
>
> and this one in post_alloc_hook:
>
>         /*
>          * As memory initialization might be integrated into KASAN,
>          * kasan_alloc_pages and kernel_init_free_pages must be
>          * kept together to avoid discrepancies in behavior.
>          */
>
> Is that not enough?

Ah, forgot about those two. Alright, let's keep this version with comments then.
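
As an aside, the "macro or static inline helper" approach floated above, and
ultimately not adopted in favor of the cross-referencing comments, might have
looked something like the following minimal sketch. The helper names here are
made up for illustration; they are not part of the applied patch or of any
kernel API:

        /*
         * Hypothetical helpers (not in the applied patch): a single
         * definition of the init conditions that both page_alloc.c and
         * mm/kasan/hw_tags.c could call, so that the duplicated
         * expressions cannot silently drift apart.
         */
        static inline bool want_page_init_on_alloc(gfp_t flags)
        {
                /* Must match the condition used in post_alloc_hook(). */
                return !want_init_on_free() && want_init_on_alloc(flags);
        }

        static inline bool want_page_init_on_free(void)
        {
                /* Must match the condition used in free_pages_prepare(). */
                return want_init_on_free();
        }

With these, kasan_alloc_pages() in hw_tags.c and post_alloc_hook() in
page_alloc.c would both compute init via want_page_init_on_alloc(), and
kasan_free_pages() and free_pages_prepare() via want_page_init_on_free(),
at the cost of the extra level of indirection Peter points out.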