From mboxrd@z Thu Jan  1 00:00:00 1970
From: Marco Elver
Date: Tue, 12 Jul 2022 15:13:55 +0200
Subject: Re: [PATCH v4 15/45] mm: kmsan: call KMSAN hooks from SLUB code
In-Reply-To: <20220701142310.2188015-16-glider@google.com>
References: <20220701142310.2188015-1-glider@google.com>
 <20220701142310.2188015-16-glider@google.com>
To: Alexander Potapenko
Cc: Alexander Viro, Alexei Starovoitov, Andrew Morton, Andrey Konovalov,
 Andy Lutomirski, Arnd Bergmann, Borislav Petkov, Christoph Hellwig,
 Christoph Lameter, David Rientjes, Dmitry Vyukov, Eric Dumazet,
 Greg Kroah-Hartman, Herbert Xu, Ilya Leoshkevich, Ingo Molnar, Jens Axboe,
 Joonsoo Kim, Kees Cook, Mark Rutland, Matthew Wilcox, "Michael S. Tsirkin",
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Content-Type: text/plain; charset="UTF-8" Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Fri, 1 Jul 2022 at 16:23, 'Alexander Potapenko' via kasan-dev wrote: > > In order to report uninitialized memory coming from heap allocations > KMSAN has to poison them unless they're created with __GFP_ZERO. > > It's handy that we need KMSAN hooks in the places where > init_on_alloc/init_on_free initialization is performed. > > In addition, we apply __no_kmsan_checks to get_freepointer_safe() to > suppress reports when accessing freelist pointers that reside in freed > objects. > > Signed-off-by: Alexander Potapenko Reviewed-by: Marco Elver But see comment below. > --- > v2: > -- move the implementation of SLUB hooks here > > v4: > -- change sizeof(type) to sizeof(*ptr) > -- swap mm: and kmsan: in the subject > -- get rid of kmsan_init(), replace it with __no_kmsan_checks > > Link: https://linux-review.googlesource.com/id/I6954b386c5c5d7f99f48bb6cbcc74b75136ce86e > --- > include/linux/kmsan.h | 57 ++++++++++++++++++++++++++++++ > mm/kmsan/hooks.c | 80 +++++++++++++++++++++++++++++++++++++++++++ > mm/slab.h | 1 + > mm/slub.c | 18 ++++++++++ > 4 files changed, 156 insertions(+) > > diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h > index 699fe4f5b3bee..fd76cea338878 100644 > --- a/include/linux/kmsan.h > +++ b/include/linux/kmsan.h > @@ -15,6 +15,7 @@ > #include > > struct page; > +struct kmem_cache; > > #ifdef CONFIG_KMSAN > > @@ -72,6 +73,44 @@ void kmsan_free_page(struct page *page, unsigned int order); > */ > void kmsan_copy_page_meta(struct page *dst, struct page *src); > > +/** > + * kmsan_slab_alloc() - Notify KMSAN about a slab allocation. > + * @s: slab cache the object belongs to. > + * @object: object pointer. > + * @flags: GFP flags passed to the allocator. > + * > + * Depending on cache flags and GFP flags, KMSAN sets up the metadata of the > + * newly created object, marking it as initialized or uninitialized. > + */ > +void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags); > + > +/** > + * kmsan_slab_free() - Notify KMSAN about a slab deallocation. > + * @s: slab cache the object belongs to. > + * @object: object pointer. > + * > + * KMSAN marks the freed object as uninitialized. > + */ > +void kmsan_slab_free(struct kmem_cache *s, void *object); > + > +/** > + * kmsan_kmalloc_large() - Notify KMSAN about a large slab allocation. > + * @ptr: object pointer. > + * @size: object size. > + * @flags: GFP flags passed to the allocator. > + * > + * Similar to kmsan_slab_alloc(), but for large allocations. > + */ > +void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags); > + > +/** > + * kmsan_kfree_large() - Notify KMSAN about a large slab deallocation. > + * @ptr: object pointer. > + * > + * Similar to kmsan_slab_free(), but for large allocations. > + */ > +void kmsan_kfree_large(const void *ptr); > + > /** > * kmsan_map_kernel_range_noflush() - Notify KMSAN about a vmap. > * @start: start of vmapped range. 
> @@ -138,6 +177,24 @@ static inline void kmsan_copy_page_meta(struct page *dst, struct page *src)
>  {
>  }
>
> +static inline void kmsan_slab_alloc(struct kmem_cache *s, void *object,
> +				    gfp_t flags)
> +{
> +}
> +
> +static inline void kmsan_slab_free(struct kmem_cache *s, void *object)
> +{
> +}
> +
> +static inline void kmsan_kmalloc_large(const void *ptr, size_t size,
> +				       gfp_t flags)
> +{
> +}
> +
> +static inline void kmsan_kfree_large(const void *ptr)
> +{
> +}
> +
>  static inline void kmsan_vmap_pages_range_noflush(unsigned long start,
>  						  unsigned long end,
>  						  pgprot_t prot,
> diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
> index 070756be70e3a..052e17b7a717d 100644
> --- a/mm/kmsan/hooks.c
> +++ b/mm/kmsan/hooks.c
> @@ -26,6 +26,86 @@
>   * skipping effects of functions like memset() inside instrumented code.
>   */
>
> +void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags)
> +{
> +	if (unlikely(object == NULL))
> +		return;
> +	if (!kmsan_enabled || kmsan_in_runtime())
> +		return;
> +	/*
> +	 * There's a ctor or this is an RCU cache - do nothing. The memory
> +	 * status hasn't changed since last use.
> +	 */
> +	if (s->ctor || (s->flags & SLAB_TYPESAFE_BY_RCU))
> +		return;
> +
> +	kmsan_enter_runtime();
> +	if (flags & __GFP_ZERO)
> +		kmsan_internal_unpoison_memory(object, s->object_size,
> +					       KMSAN_POISON_CHECK);
> +	else
> +		kmsan_internal_poison_memory(object, s->object_size, flags,
> +					     KMSAN_POISON_CHECK);
> +	kmsan_leave_runtime();
> +}
> +EXPORT_SYMBOL(kmsan_slab_alloc);
> +
> +void kmsan_slab_free(struct kmem_cache *s, void *object)
> +{
> +	if (!kmsan_enabled || kmsan_in_runtime())
> +		return;
> +
> +	/* RCU slabs could be legally used after free within the RCU period */
> +	if (unlikely(s->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)))
> +		return;
> +	/*
> +	 * If there's a constructor, freed memory must remain in the same state
> +	 * until the next allocation. We cannot save its state to detect
> +	 * use-after-free bugs, instead we just keep it unpoisoned.
> +	 */
> +	if (s->ctor)
> +		return;
> +	kmsan_enter_runtime();
> +	kmsan_internal_poison_memory(object, s->object_size, GFP_KERNEL,
> +				     KMSAN_POISON_CHECK | KMSAN_POISON_FREE);
> +	kmsan_leave_runtime();
> +}
> +EXPORT_SYMBOL(kmsan_slab_free);
> +
> +void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
> +{
> +	if (unlikely(ptr == NULL))
> +		return;
> +	if (!kmsan_enabled || kmsan_in_runtime())
> +		return;
> +	kmsan_enter_runtime();
> +	if (flags & __GFP_ZERO)
> +		kmsan_internal_unpoison_memory((void *)ptr, size,
> +					       /*checked*/ true);
> +	else
> +		kmsan_internal_poison_memory((void *)ptr, size, flags,
> +					     KMSAN_POISON_CHECK);
> +	kmsan_leave_runtime();
> +}
> +EXPORT_SYMBOL(kmsan_kmalloc_large);
> +
> +void kmsan_kfree_large(const void *ptr)
> +{
> +	struct page *page;
> +
> +	if (!kmsan_enabled || kmsan_in_runtime())
> +		return;
> +	kmsan_enter_runtime();
> +	page = virt_to_head_page((void *)ptr);
> +	KMSAN_WARN_ON(ptr != page_address(page));
> +	kmsan_internal_poison_memory((void *)ptr,
> +				     PAGE_SIZE << compound_order(page),
> +				     GFP_KERNEL,
> +				     KMSAN_POISON_CHECK | KMSAN_POISON_FREE);
> +	kmsan_leave_runtime();
> +}
> +EXPORT_SYMBOL(kmsan_kfree_large);
> +
>  static unsigned long vmalloc_shadow(unsigned long addr)
>  {
>  	return (unsigned long)kmsan_get_metadata((void *)addr,
> diff --git a/mm/slab.h b/mm/slab.h
> index db9fb5c8dae73..d0de8195873d8 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -752,6 +752,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
>  			memset(p[i], 0, s->object_size);
>  		kmemleak_alloc_recursive(p[i], s->object_size, 1,
>  					 s->flags, flags);
> +		kmsan_slab_alloc(s, p[i], flags);
>  	}
>
>  	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
> diff --git a/mm/slub.c b/mm/slub.c
> index b1281b8654bd3..b8b601f165087 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -22,6 +22,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -359,6 +360,17 @@ static void prefetch_freepointer(const struct kmem_cache *s, void *object)
>  	prefetchw(object + s->offset);
>  }
>
> +/*
> + * When running under KMSAN, get_freepointer_safe() may return an uninitialized
> + * pointer value in the case the current thread loses the race for the next
> + * memory chunk in the freelist. In that case this_cpu_cmpxchg_double() in
> + * slab_alloc_node() will fail, so the uninitialized value won't be used, but
> + * KMSAN will still check all arguments of cmpxchg because of imperfect
> + * handling of inline assembly.
> + * To work around this problem, we apply __no_kmsan_checks to ensure that
> + * get_freepointer_safe() returns initialized memory.
> + */
> +__no_kmsan_checks
>  static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
>  {
>  	unsigned long freepointer_addr;
> @@ -1709,6 +1721,7 @@ static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
>  	ptr = kasan_kmalloc_large(ptr, size, flags);
>  	/* As ptr might get tagged, call kmemleak hook after KASAN. */
>  	kmemleak_alloc(ptr, size, 1, flags);
> +	kmsan_kmalloc_large(ptr, size, flags);
>  	return ptr;
>  }
>
> @@ -1716,12 +1729,14 @@ static __always_inline void kfree_hook(void *x)
>  {
>  	kmemleak_free(x);
>  	kasan_kfree_large(x);
> +	kmsan_kfree_large(x);
>  }
>
>  static __always_inline bool slab_free_hook(struct kmem_cache *s,
>  					   void *x, bool init)
>  {
>  	kmemleak_free_recursive(x, s->flags);
> +	kmsan_slab_free(s, x);
>
>  	debug_check_no_locks_freed(x, s->object_size);
>
> @@ -3756,6 +3771,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
>  	 */
>  	slab_post_alloc_hook(s, objcg, flags, size, p,
>  			     slab_want_init_on_alloc(flags, s));
> +

Remove unnecessary whitespace change.
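
As an illustration of what these hooks enable (a minimal, hypothetical sketch, not part of the patch or the quoted code; kmsan_heap_example() below is made up for this example and assumes a KMSAN-enabled kernel): an object returned by kmalloc() without __GFP_ZERO is poisoned by kmsan_slab_alloc(), so reading it before initialization should produce a KMSAN report, whereas a __GFP_ZERO allocation is unpoisoned and should stay quiet:

#include <linux/slab.h>
#include <linux/printk.h>

static void kmsan_heap_example(void)
{
	int *a = kmalloc(sizeof(*a), GFP_KERNEL);		/* poisoned by kmsan_slab_alloc() */
	int *b = kmalloc(sizeof(*b), GFP_KERNEL | __GFP_ZERO);	/* unpoisoned (zero-initialized) */

	if (a && *a)	/* use of uninitialized value: KMSAN should report here */
		pr_info("a is non-zero\n");
	if (b && *b)	/* initialized to zero: no report expected */
		pr_info("b is non-zero\n");

	kfree(a);
	kfree(b);
}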