From: Peter Collingbourne
Date: Thu, 27 May 2021 18:04:57 -0700
Subject: Re: [PATCH v3 1/3] kasan: use separate (un)poison implementation for integrated init
To: Andrey Konovalov
Cc: Alexander Potapenko, Catalin Marinas, Vincenzo Frascino, Andrew Morton,
    Evgenii Stepanov, Linux Memory Management List, Linux ARM

On Tue, May 25, 2021 at 3:00 PM Andrey Konovalov wrote:
>
> On Wed, May 12, 2021 at 11:09 PM Peter Collingbourne wrote:
> >
> > Currently with integrated init page_alloc.c needs to know whether
> > kasan_alloc_pages() will zero initialize memory, but this will start
> > becoming more complicated once we start adding tag initialization
> > support for user pages. To avoid page_alloc.c needing to know more
> > details of what integrated init will do, move the unpoisoning logic
> > for integrated init into the HW tags implementation. Currently the
> > logic is identical but it will diverge in subsequent patches.
> >
> > For symmetry do the same for poisoning although this logic will
> > be unaffected by subsequent patches.
> >
> > Signed-off-by: Peter Collingbourne
> > Link: https://linux-review.googlesource.com/id/I2c550234c6c4a893c48c18ff0c6ce658c7c67056
> > ---
> > v3:
> > - use BUILD_BUG()
> >
> > v2:
> > - fix build with KASAN disabled
> >
> >  include/linux/kasan.h | 66 +++++++++++++++++++++++++++----------------
> >  mm/kasan/common.c     |  4 +--
> >  mm/kasan/hw_tags.c    | 14 +++++++++
> >  mm/mempool.c          |  6 ++--
> >  mm/page_alloc.c       | 56 +++++++++++++++++++-----------------
> >  5 files changed, 91 insertions(+), 55 deletions(-)
> >
> > diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> > index b1678a61e6a7..38061673e6ac 100644
> > --- a/include/linux/kasan.h
> > +++ b/include/linux/kasan.h
> > @@ -2,6 +2,7 @@
> >  #ifndef _LINUX_KASAN_H
> >  #define _LINUX_KASAN_H
> >
> > +#include <linux/bug.h>
> >  #include <linux/static_key.h>
> >  #include <linux/types.h>
> >
> > @@ -79,14 +80,6 @@ static inline void kasan_disable_current(void) {}
> >
> >  #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
> >
> > -#ifdef CONFIG_KASAN
> > -
> > -struct kasan_cache {
> > -	int alloc_meta_offset;
> > -	int free_meta_offset;
> > -	bool is_kmalloc;
> > -};
> > -
> >  #ifdef CONFIG_KASAN_HW_TAGS
> >
> >  DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
> > @@ -101,11 +94,18 @@ static inline bool kasan_has_integrated_init(void)
> >  	return kasan_enabled();
> >  }
> >
> > +void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags);
> > +void kasan_free_pages(struct page *page, unsigned int order);
> > +
> >  #else /* CONFIG_KASAN_HW_TAGS */
> >
> >  static inline bool kasan_enabled(void)
> >  {
> > +#ifdef CONFIG_KASAN
> >  	return true;
> > +#else
> > +	return false;
> > +#endif
> >  }
> >
> >  static inline bool kasan_has_integrated_init(void)
> > @@ -113,8 +113,30 @@ static inline bool kasan_has_integrated_init(void)
> >  	return false;
> >  }
> >
> > +static __always_inline void kasan_alloc_pages(struct page *page,
> > +					      unsigned int order, gfp_t flags)
> > +{
> > +	/* Only available for integrated init. */
> > +	BUILD_BUG();
> > +}
> > +
> > +static __always_inline void kasan_free_pages(struct page *page,
> > +					     unsigned int order)
> > +{
> > +	/* Only available for integrated init. */
> > +	BUILD_BUG();
> > +}
> > +
> >  #endif /* CONFIG_KASAN_HW_TAGS */
> >
> > +#ifdef CONFIG_KASAN
> > +
> > +struct kasan_cache {
> > +	int alloc_meta_offset;
> > +	int free_meta_offset;
> > +	bool is_kmalloc;
> > +};
> > +
> >  slab_flags_t __kasan_never_merge(void);
> >  static __always_inline slab_flags_t kasan_never_merge(void)
> >  {
> > @@ -130,20 +152,20 @@ static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
> >  	__kasan_unpoison_range(addr, size);
> >  }
> >
> > -void __kasan_alloc_pages(struct page *page, unsigned int order, bool init);
> > -static __always_inline void kasan_alloc_pages(struct page *page,
> > +void __kasan_poison_pages(struct page *page, unsigned int order, bool init);
> > +static __always_inline void kasan_poison_pages(struct page *page,
> >  						unsigned int order, bool init)
> >  {
> >  	if (kasan_enabled())
> > -		__kasan_alloc_pages(page, order, init);
> > +		__kasan_poison_pages(page, order, init);
> >  }
> >
> > -void __kasan_free_pages(struct page *page, unsigned int order, bool init);
> > -static __always_inline void kasan_free_pages(struct page *page,
> > -						unsigned int order, bool init)
> > +void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init);
> > +static __always_inline void kasan_unpoison_pages(struct page *page,
> > +						 unsigned int order, bool init)
> >  {
> >  	if (kasan_enabled())
> > -		__kasan_free_pages(page, order, init);
> > +		__kasan_unpoison_pages(page, order, init);
> >  }
> >
> >  void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
> > @@ -285,21 +307,15 @@ void kasan_restore_multi_shot(bool enabled);
> >
> >  #else /* CONFIG_KASAN */
> >
> > -static inline bool kasan_enabled(void)
> > -{
> > -	return false;
> > -}
> > -static inline bool kasan_has_integrated_init(void)
> > -{
> > -	return false;
> > -}
> >  static inline slab_flags_t kasan_never_merge(void)
> >  {
> >  	return 0;
> >  }
> >  static inline void kasan_unpoison_range(const void *address, size_t size) {}
> > -static inline void kasan_alloc_pages(struct page *page, unsigned int order, bool init) {}
> > -static inline void kasan_free_pages(struct page *page, unsigned int order, bool init) {}
> > +static inline void kasan_poison_pages(struct page *page, unsigned int order,
> > +				      bool init) {}
> > +static inline void kasan_unpoison_pages(struct page *page, unsigned int order,
> > +					bool init) {}
> >  static inline void kasan_cache_create(struct kmem_cache *cache,
> >  				      unsigned int *size,
> >  				      slab_flags_t *flags) {}
> > diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> > index 6bb87f2acd4e..0ecd293af344 100644
> > --- a/mm/kasan/common.c
> > +++ b/mm/kasan/common.c
> > @@ -97,7 +97,7 @@ slab_flags_t __kasan_never_merge(void)
> >  	return 0;
> >  }
> >
> > -void __kasan_alloc_pages(struct page *page, unsigned int order, bool init)
> > +void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init)
> >  {
> >  	u8 tag;
> >  	unsigned long i;
> > @@ -111,7 +111,7 @@ void __kasan_alloc_pages(struct page *page, unsigned int order, bool init)
> >  	kasan_unpoison(page_address(page), PAGE_SIZE << order, init);
> >  }
> >
> > -void __kasan_free_pages(struct page *page, unsigned int order, bool init)
> > +void __kasan_poison_pages(struct page *page, unsigned int order, bool init)
> >  {
> >  	if (likely(!PageHighMem(page)))
> >  		kasan_poison(page_address(page), PAGE_SIZE << order,
> > diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
> > index 4004388b4e4b..45e552cb9172 100644
> > --- a/mm/kasan/hw_tags.c
> > +++ b/mm/kasan/hw_tags.c
> > @@ -238,6 +238,20 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
> >  	return &alloc_meta->free_track[0];
> >  }
> >
> > +void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
> > +{
> > +	bool init = !want_init_on_free() && want_init_on_alloc(flags);
>
> This check is now duplicated. One check here, the same one in
> page_alloc.c. Please either add a helper that gets used in both
> places, or at least a comment that the checks must be kept in sync.

Added a comment in v4.

> > +
> > +	kasan_unpoison_pages(page, order, init);
> > +}
> > +
> > +void kasan_free_pages(struct page *page, unsigned int order)
> > +{
> > +	bool init = want_init_on_free();
>
> Same here.

Likewise.

> > +
> > +	kasan_poison_pages(page, order, init);
> > +}
> > +
> >  #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
> >
> >  void kasan_set_tagging_report_once(bool state)
> > diff --git a/mm/mempool.c b/mm/mempool.c
> > index a258cf4de575..0b8afbec3e35 100644
> > --- a/mm/mempool.c
> > +++ b/mm/mempool.c
> > @@ -106,7 +106,8 @@ static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
> >  	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
> >  		kasan_slab_free_mempool(element);
> >  	else if (pool->alloc == mempool_alloc_pages)
> > -		kasan_free_pages(element, (unsigned long)pool->pool_data, false);
> > +		kasan_poison_pages(element, (unsigned long)pool->pool_data,
> > +				   false);
> >  }
> >
> >  static void kasan_unpoison_element(mempool_t *pool, void *element)
> > @@ -114,7 +115,8 @@ static void kasan_unpoison_element(mempool_t *pool, void *element)
> >  	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
> >  		kasan_unpoison_range(element, __ksize(element));
> >  	else if (pool->alloc == mempool_alloc_pages)
> > -		kasan_alloc_pages(element, (unsigned long)pool->pool_data, false);
> > +		kasan_unpoison_pages(element, (unsigned long)pool->pool_data,
> > +				     false);
> >  }
> >
> >  static __always_inline void add_element(mempool_t *pool, void *element)
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index aaa1655cf682..6e82a7f6fd6f 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -382,7 +382,7 @@ int page_group_by_mobility_disabled __read_mostly;
> >  static DEFINE_STATIC_KEY_TRUE(deferred_pages);
> >
> >  /*
> > - * Calling kasan_free_pages() only after deferred memory initialization
> > + * Calling kasan_poison_pages() only after deferred memory initialization
> >   * has completed. Poisoning pages during deferred memory init will greatly
> >   * lengthen the process and cause problem in large memory systems as the
> >   * deferred pages initialization is done with interrupt disabled.
> > @@ -394,15 +394,11 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages);
> >   * on-demand allocation and then freed again before the deferred pages
> >   * initialization is done, but this is not likely to happen.
> >   */
> > -static inline void kasan_free_nondeferred_pages(struct page *page, int order,
> > -					bool init, fpi_t fpi_flags)
> > +static inline bool should_skip_kasan_poison(fpi_t fpi_flags)
> >  {
> > -	if (static_branch_unlikely(&deferred_pages))
> > -		return;
> > -	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> > -	    (fpi_flags & FPI_SKIP_KASAN_POISON))
> > -		return;
> > -	kasan_free_pages(page, order, init);
> > +	return static_branch_unlikely(&deferred_pages) ||
> > +	       (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> > +		(fpi_flags & FPI_SKIP_KASAN_POISON));
> >  }
> >
> >  /* Returns true if the struct page for the pfn is uninitialised */
> > @@ -453,13 +449,10 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
> >  	return false;
> >  }
> >  #else
> > -static inline void kasan_free_nondeferred_pages(struct page *page, int order,
> > -					bool init, fpi_t fpi_flags)
> > +static inline bool should_skip_kasan_poison(fpi_t fpi_flags)
> >  {
> > -	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> > -	    (fpi_flags & FPI_SKIP_KASAN_POISON))
> > -		return;
> > -	kasan_free_pages(page, order, init);
> > +	return (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
> > +		(fpi_flags & FPI_SKIP_KASAN_POISON));
> >  }
> >
> >  static inline bool early_page_uninitialised(unsigned long pfn)
> > @@ -1245,7 +1238,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
> >  			unsigned int order, bool check_free, fpi_t fpi_flags)
> >  {
> >  	int bad = 0;
> > -	bool init;
> > +	bool skip_kasan_poison = should_skip_kasan_poison(fpi_flags);
> >
> >  	VM_BUG_ON_PAGE(PageTail(page), page);
> >
> > @@ -1314,10 +1307,17 @@ static __always_inline bool free_pages_prepare(struct page *page,
> >  	 * With hardware tag-based KASAN, memory tags must be set before the
> >  	 * page becomes unavailable via debug_pagealloc or arch_free_page.
> >  	 */
> > -	init = want_init_on_free();
> > -	if (init && !kasan_has_integrated_init())
> > -		kernel_init_free_pages(page, 1 << order);
> > -	kasan_free_nondeferred_pages(page, order, init, fpi_flags);
> > +	if (kasan_has_integrated_init()) {
>
> Is it guaranteed that this branch will be eliminated when
> kasan_has_integrated_init() is static inline returning false? I know
> this works with macros, but I don't remember seeing cases with static
> inline functions. I guess it's the same, but mentioning just in case
> because BUILD_BUG() stood out.

Here's one example of where we rely on optimization after inlining to
eliminate a BUILD_BUG():

https://github.com/torvalds/linux/blob/3224374f7eb08fbb36d3963895da20ff274b8e6a/arch/arm64/include/asm/arm_dsu_pmu.h#L123

Peter
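For reference, a minimal user-space sketch of the dead-branch elimination
being discussed. The names are illustrative only, and the BUILD_BUG()
stand-in below uses the undefined-reference trick rather than the kernel's
compiletime_assert machinery, so this is not the kernel code, just the same
pattern in miniature:

/* Build with: gcc -O2 -o demo demo.c
 *
 * At -O0 the link fails because the call to the undefined symbol survives;
 * at -O1 and above the constant-returning static inline is folded, the
 * branch is proven dead after inlining, and the call is removed.
 */
#include <stdio.h>

/* Stand-in for BUILD_BUG(): the program only links if this call is
 * optimized away. */
extern void __dead_branch_not_eliminated(void);
#define BUILD_BUG() __dead_branch_not_eliminated()

static inline int has_integrated_init(void)
{
	return 0;	/* analogous to kasan_has_integrated_init() == false */
}

static inline void integrated_init_only(void)
{
	/* Only reachable when integrated init is compiled in. */
	BUILD_BUG();
}

int main(void)
{
	if (has_integrated_init())
		integrated_init_only();	/* dead branch, removed after inlining */
	else
		puts("non-integrated path");
	return 0;
}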
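On the earlier point about the duplicated init check, one possible shape for
the shared helper Andrey suggests (hypothetical name, not part of this patch)
would be a small inline next to want_init_on_alloc()/want_init_on_free() that
both mm/kasan/hw_tags.c and mm/page_alloc.c could call:

/* Hypothetical helper: zero-init on alloc is only needed when the page is
 * not already zeroed on free. */
static inline bool want_zero_init_on_alloc(gfp_t flags)
{
	return !want_init_on_free() && want_init_on_alloc(flags);
}

kasan_alloc_pages() and the page_alloc.c caller would then both use
want_zero_init_on_alloc(flags) instead of open-coding the condition, keeping
the two sites in sync by construction rather than by comment.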